{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting started with prompty"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "**Learning Objectives** - Upon completing this tutorial, you should be able to:\n",
    "\n",
    "- Write an LLM application using prompty and visualize the trace of your application.\n",
    "- Batch-run a prompty against multiple lines of data.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 0. Install required packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install promptflow-core"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Execute a Prompty\n",
    "\n",
    "Prompty is a file with the .prompty extension for developing prompt templates. \n",
    "A prompty asset is a markdown file with a modified front matter. \n",
    "The front matter is in YAML format and contains a number of metadata fields that define the model configuration and the expected inputs of the prompty."
   ]
  },
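  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, a prompty front matter might look like the sketch below. This is an assumed example, not necessarily the exact contents of `basic.prompty`, which the next cell prints:\n",
    "\n",
    "```yaml\n",
    "---\n",
    "name: Basic Prompt\n",
    "model:\n",
    "  api: chat\n",
    "  configuration:\n",
    "    type: azure_openai\n",
    "    azure_deployment: gpt-35-turbo-0125\n",
    "  parameters:\n",
    "    max_tokens: 128\n",
    "inputs:\n",
    "  question:\n",
    "    type: string\n",
    "---\n",
    "```"
   ]
  },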
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open(\"basic.prompty\") as fin:\n",
    "    print(fin.read())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: before running the cell below, please configure the required environment variables `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` by creating a `.env` file. Please refer to `../.env.example` as a template.\n"
   ]
  },
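  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the `.env` file is expected to contain the two variables named above, with illustrative placeholder values like:\n",
    "\n",
    "```text\n",
    "AZURE_OPENAI_API_KEY=<your-api-key>\n",
    "AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/\n",
    "```"
   ]
  },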
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "if \"AZURE_OPENAI_API_KEY\" not in os.environ:\n",
    "    # load environment variables from .env file\n",
    "    load_dotenv()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from promptflow.core import Prompty\n",
    "\n",
    "# load prompty as a flow\n",
    "f = Prompty.load(source=\"basic.prompty\")\n",
    "\n",
    "# execute the flow as a function\n",
    "result = f(question=\"What is the capital of France?\")\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can override the model configuration with `AzureOpenAIModelConfiguration` or `OpenAIModelConfiguration`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from promptflow.core import AzureOpenAIModelConfiguration, OpenAIModelConfiguration\n",
    "\n",
    "# override configuration with AzureOpenAIModelConfiguration\n",
    "configuration = AzureOpenAIModelConfiguration(\n",
    "    # azure_endpoint=\"${env:AZURE_OPENAI_ENDPOINT}\",  # Use ${env:<ENV_NAME>} to reference an environment variable.\n",
    "    # api_key=\"${env:AZURE_OPENAI_API_KEY}\",\n",
    "    azure_deployment=\"gpt-35-turbo-0125\",\n",
    ")\n",
    "\n",
    "# override configuration with OpenAIModelConfiguration\n",
    "# configuration = OpenAIModelConfiguration(\n",
    "#     base_url=\"${env:OPENAI_BASE_URL}\",\n",
    "#     api_key=\"${env:OPENAI_API_KEY}\",\n",
    "#     model=\"gpt-3.5-turbo\"\n",
    "# )\n",
    "\n",
    "override_model = {\"configuration\": configuration, \"parameters\": {\"max_tokens\": 512}}\n",
    "\n",
    "# load prompty as a flow, with the model override applied\n",
    "f = Prompty.load(source=\"basic.prompty\", model=override_model)\n",
    "\n",
    "# execute the flow as a function\n",
    "result = f(question=\"What is the capital of France?\")\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Visualize the trace using start_trace"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from promptflow.tracing import start_trace\n",
    "\n",
    "# start a trace session, and print a URL where the user can view the trace\n",
    "start_trace()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Re-running the cell below will collect a trace in the trace UI."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rerun the function, which will be recorded in the trace\n",
    "question = \"What is the capital of Japan?\"\n",
    "ground_truth = \"Tokyo\"\n",
    "result = f(question=question)\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluate the result \n",
    "\n",
    "Note: the eval flow returns a `json_object`. You need a newer model version, such as [gpt-35-turbo (0125)](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#gpt-35-models), to use the `json_object` response_format feature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the eval prompty as a flow\n",
    "eval_flow = Prompty.load(\"../eval-basic/eval.prompty\")\n",
    "# execute the flow as a function\n",
    "result = eval_flow(question=question, ground_truth=ground_truth, answer=result)\n",
    "result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Batch run with multi-line data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%capture --no-stderr\n",
    "# batch runs require the promptflow-devkit package\n",
    "%pip install promptflow-devkit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from promptflow.client import PFClient\n",
    "\n",
    "pf = PFClient()"
   ]
  },
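  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The data file is in JSONL format: one JSON object per line. Given the column mappings used below (`${data.question}` here and `${data.ground_truth}` in the eval run), each line of `data.jsonl` presumably looks something like:\n",
    "\n",
    "```json\n",
    "{\"question\": \"What is the capital of France?\", \"ground_truth\": \"Paris\"}\n",
    "```"
   ]
  },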
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "flow = \"./basic.prompty\"  # path to the prompty file\n",
    "data = \"./data.jsonl\"  # path to the data file\n",
    "\n",
    "# create a run with the flow and data\n",
    "base_run = pf.run(\n",
    "    flow=flow,\n",
    "    data=data,\n",
    "    column_mapping={\n",
    "        \"question\": \"${data.question}\",\n",
    "    },\n",
    "    stream=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "details = pf.get_details(base_run)\n",
    "details.head(10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Evaluate your flow\n",
    "Then you can use an evaluation method to evaluate your flow. Evaluation methods are also flows, which usually use an LLM to assert that the produced output matches certain expectations. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run evaluation on the previous batch run\n",
    "The **base_run** is the batch run we completed in step 2 above, for the basic prompty with \"data.jsonl\" as input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_prompty = \"../eval-basic/eval.prompty\"\n",
    "\n",
    "eval_run = pf.run(\n",
    "    flow=eval_prompty,\n",
    "    data=\"./data.jsonl\",  # path to the data file\n",
    "    run=base_run,  # specify base_run as the run you want to evaluate\n",
    "    column_mapping={\n",
    "        \"question\": \"${data.question}\",\n",
    "        \"answer\": \"${run.outputs.output}\",  # TODO refine this mapping\n",
    "        \"ground_truth\": \"${data.ground_truth}\",\n",
    "    },\n",
    "    stream=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "details = pf.get_details(eval_run)\n",
    "details.head(10)"
   ]
  },
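  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides per-line details, `PFClient` also exposes `get_metrics` to fetch run-level aggregated metrics. This is a sketch: a prompty-based eval run may not log any aggregated metrics, in which case the result is simply an empty dict."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# fetch aggregated metrics of the eval run (may be empty for prompty evals)\n",
    "metrics = pf.get_metrics(eval_run)\n",
    "print(metrics)"
   ]
  },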
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# visualize the runs using the UI\n",
    "pf.visualize([base_run, eval_run])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Next steps\n",
    "\n",
    "By now you've successfully run your first prompty and even evaluated it. That's great!\n",
    "\n",
    "You can check out more examples:\n",
    "- [Basic Chat](https://github.com/microsoft/promptflow/tree/main/examples/prompty/chat-basic): demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message."
   ]
  }
 ],
 "metadata": {
  "build_doc": {
   "author": [
    "lalala123123@github.com",
    "wangchao1230@github.com"
   ],
   "category": "local",
   "section": "Prompty",
   "weight": 10
  },
  "description": "A quickstart tutorial to run a prompty and evaluate it.",
  "kernelspec": {
   "display_name": "prompt_flow",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  },
  "resources": "examples/requirements.txt, examples/prompty/basic, examples/prompty/eval-basic"
 },
 "nbformat": 4,
 "nbformat_minor": 2
}