{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/01-langchain-prompt-templates.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/01-langchain-prompt-templates.ipynb)\n",
    "\n",
    "# Prompt Engineering\n",
    "\n",
    "In this notebook we'll explore the fundamentals of prompt engineering. We'll start by installing library prerequisites."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install langchain openai"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Structure of a Prompt\n",
    "\n",
    "A prompt can consist of multiple components:\n",
    "\n",
    "* Instructions\n",
    "* External information or context\n",
    "* User input or query\n",
    "* Output indicator\n",
    "\n",
    "Not all prompts require all of these components, but often a good prompt will use two or more of them. Let's define each of them more precisely.\n",
    "\n",
    "**Instructions** tell the model what to do, typically how it should use inputs and/or external information to produce the output we want.\n",
    "\n",
    "**External information or context** is additional information that we either manually insert into the prompt, retrieve via a vector database (long-term memory), or pull in through other means (API calls, calculations, etc.).\n",
    "\n",
    "**User input or query** is typically a query directly input by the user of the system.\n",
    "\n",
    "**Output indicator** is the *beginning* of the generated text. For a model generating Python code we may put `import ` (as most Python scripts begin with a library `import`), or a chatbot may begin with `Chatbot: ` (assuming we format the chatbot script as lines of interchanging text between `User` and `Chatbot`).\n",
    "\n",
    "Each of these components should usually be placed in the order we've described them: we start with instructions, provide context (if needed), then add the user input, and finally end with the output indicator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"\"\"Answer the question based on the context below. If the\n",
    "question cannot be answered using the information provided, answer\n",
    "with \"I don't know\".\n",
    "\n",
    "Context: Large Language Models (LLMs) are the latest models used in NLP.\n",
    "Their superior performance over smaller models has made them incredibly\n",
    "useful for developers building NLP enabled applications. These models\n",
    "can be accessed via Hugging Face's `transformers` library, via OpenAI\n",
    "using the `openai` library, and via Cohere using the `cohere` library.\n",
    "\n",
    "Question: Which libraries and model providers offer LLMs?\n",
    "\n",
    "Answer: \"\"\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example we have:\n",
    "\n",
    "```\n",
    "Instructions\n",
    "\n",
    "Context\n",
    "\n",
    "Question (user input)\n",
    "\n",
    "Output indicator (\"Answer: \")\n",
    "```\n",
    "\n",
    "Let's try sending this to a GPT-3 model. We will use the LangChain library, but you can also use the `openai` library directly. In both cases, you will need [an OpenAI API key](https://beta.openai.com/account/api-keys).\n",
    "\n",
    "We initialize a `text-davinci-003` model like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
    "\n",
    "# initialize the model\n",
    "openai = OpenAI(\n",
    "    model_name=\"text-davinci-003\",\n",
    "    openai_api_key=\"YOUR_API_KEY\"\n",
    ")"
   ]
  },
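  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, if you'd rather not hardcode the key, here's a small sketch of the same initialization reading it from the `OPENAI_API_KEY` environment variable (which the wrapper also checks by default when `openai_api_key` is omitted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# alternative sketch: pull the API key from the environment instead of\n",
    "# hardcoding it in the notebook\n",
    "import os\n",
    "\n",
    "openai = OpenAI(\n",
    "    model_name=\"text-davinci-003\",\n",
    "    openai_api_key=os.environ[\"OPENAI_API_KEY\"]\n",
    ")"
   ]
  },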
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And make a generation from our prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Hugging Face's `transformers` library, OpenAI using the `openai` library, and Cohere using the `cohere` library.\n"
     ]
    }
   ],
   "source": [
    "print(openai(prompt))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We wouldn't typically know what the user's prompt is beforehand, so we want to add it in dynamically. Rather than writing the prompt directly, we create a `PromptTemplate` with a single input variable `query`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain import PromptTemplate\n",
    "\n",
    "template = \"\"\"Answer the question based on the context below. If the\n",
    "question cannot be answered using the information provided, answer\n",
    "with \"I don't know\".\n",
    "\n",
    "Context: Large Language Models (LLMs) are the latest models used in NLP.\n",
    "Their superior performance over smaller models has made them incredibly\n",
    "useful for developers building NLP enabled applications. These models\n",
    "can be accessed via Hugging Face's `transformers` library, via OpenAI\n",
    "using the `openai` library, and via Cohere using the `cohere` library.\n",
    "\n",
    "Question: {query}\n",
    "\n",
    "Answer: \"\"\"\n",
    "\n",
    "prompt_template = PromptTemplate(\n",
    "    input_variables=[\"query\"],\n",
    "    template=template\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we can insert the user's `query` into the prompt template via the `query` parameter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Answer the question based on the context below. If the\n",
      "question cannot be answered using the information provided, answer\n",
      "with \"I don't know\".\n",
      "\n",
      "Context: Large Language Models (LLMs) are the latest models used in NLP.\n",
      "Their superior performance over smaller models has made them incredibly\n",
      "useful for developers building NLP enabled applications. These models\n",
      "can be accessed via Hugging Face's `transformers` library, via OpenAI\n",
      "using the `openai` library, and via Cohere using the `cohere` library.\n",
      "\n",
      "Question: Which libraries and model providers offer LLMs?\n",
      "\n",
      "Answer: \n"
     ]
    }
   ],
   "source": [
    "print(\n",
    "    prompt_template.format(\n",
    "        query=\"Which libraries and model providers offer LLMs?\"\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Hugging Face's `transformers` library, OpenAI using the `openai` library, and Cohere using the `cohere` library.\n"
     ]
    }
   ],
   "source": [
    "print(openai(\n",
    "    prompt_template.format(\n",
    "        query=\"Which libraries and model providers offer LLMs?\"\n",
    "    )\n",
    "))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is just a simple implementation that we could easily replace with f-strings (like `f\"insert some custom text '{custom_text}' etc\"`). But using LangChain's `PromptTemplate` object we're able to formalize the process, add multiple parameters, and build the prompts in an object-oriented way."
   ]
  },
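  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, here is a minimal sketch of the same prompt built with a plain f-string (purely illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# the same prompt built with a plain f-string (illustrative sketch only)\n",
    "query = \"Which libraries and model providers offer LLMs?\"\n",
    "\n",
    "fstring_prompt = f\"\"\"Answer the question based on the context below. If the\n",
    "question cannot be answered using the information provided, answer\n",
    "with \"I don't know\".\n",
    "\n",
    "Context: Large Language Models (LLMs) are the latest models used in NLP.\n",
    "Their superior performance over smaller models has made them incredibly\n",
    "useful for developers building NLP enabled applications. These models\n",
    "can be accessed via Hugging Face's `transformers` library, via OpenAI\n",
    "using the `openai` library, and via Cohere using the `cohere` library.\n",
    "\n",
    "Question: {query}\n",
    "\n",
    "Answer: \"\"\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yet, these are not the only benefits of using LangChain's prompt tooling."
   ]
  },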
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Few Shot Prompt Templates"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Another useful feature offered by LangChain is the `FewShotPromptTemplate` object. This is ideal for what we'd call *few-shot learning* using our prompts.\n",
    "\n",
    "To give some context, the primary sources of \"knowledge\" for LLMs are:\n",
    "\n",
    "* **Parametric knowledge** — the knowledge has been learned during model training and is stored within the model weights.\n",
    "\n",
    "* **Source knowledge** — the knowledge is provided within model input at inference time, i.e. via the prompt.\n",
    "\n",
    "The idea behind `FewShotPromptTemplate` is to provide few-shot training as **source knowledge**. To do this we add a few examples to our prompts that the model can read and then apply to our user's input."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Few-shot Training\n",
    "\n",
    "Sometimes we might find that a model doesn't seem to get what we'd like it to do. We can see this in the following example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " Life is like a box of chocolates, you never know what you're gonna get!\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"The following is a conversation with an AI assistant.\n",
    "The assistant is typically sarcastic and witty, producing creative\n",
    "and funny responses to the user's questions. Here are some examples:\n",
    "\n",
    "User: What is the meaning of life?\n",
    "AI: \"\"\"\n",
    "\n",
    "openai.temperature = 1.0  # increase creativity/randomness of output\n",
    "\n",
    "print(openai(prompt))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this case we're asking for something amusing, a joke in return for our serious question. But we get a serious response even with the `temperature` set to `1.0`. To help the model, we can give it a few examples of the type of answers we'd like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " 42, of course!\n"
     ]
    }
   ],
   "source": [
    "prompt = \"\"\"The following are excerpts from conversations with an AI\n",
    "assistant. The assistant is typically sarcastic and witty, producing\n",
    "creative and funny responses to the user's questions. Here are some\n",
    "examples:\n",
    "\n",
    "User: How are you?\n",
    "AI: I can't complain but sometimes I still do.\n",
    "\n",
    "User: What time is it?\n",
    "AI: It's time to get a watch.\n",
    "\n",
    "User: What is the meaning of life?\n",
    "AI: \"\"\"\n",
    "\n",
    "print(openai(prompt))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We now get a much better response. We achieved this via *few-shot learning*, adding a few examples to the prompt as source knowledge.\n",
    "\n",
    "Now, to implement this with LangChain's `FewShotPromptTemplate` we do the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain import FewShotPromptTemplate\n",
    "\n",
    "# create our examples\n",
    "examples = [\n",
    "    {\n",
    "        \"query\": \"How are you?\",\n",
    "        \"answer\": \"I can't complain but sometimes I still do.\"\n",
    "    }, {\n",
    "        \"query\": \"What time is it?\",\n",
    "        \"answer\": \"It's time to get a watch.\"\n",
    "    }\n",
    "]\n",
    "\n",
    "# create an example template\n",
    "example_template = \"\"\"\n",
    "User: {query}\n",
    "AI: {answer}\n",
    "\"\"\"\n",
    "\n",
    "# create a prompt example from the above template\n",
    "example_prompt = PromptTemplate(\n",
    "    input_variables=[\"query\", \"answer\"],\n",
    "    template=example_template\n",
    ")\n",
    "\n",
    "# now break our previous prompt into a prefix and suffix\n",
    "# the prefix is our instructions\n",
    "prefix = \"\"\"The following are excerpts from conversations with an AI\n",
    "assistant. The assistant is typically sarcastic and witty, producing\n",
    "creative and funny responses to the user's questions. Here are some\n",
    "examples:\n",
    "\"\"\"\n",
    "# and the suffix is our user input and output indicator\n",
    "suffix = \"\"\"\n",
    "User: {query}\n",
    "AI: \"\"\"\n",
    "\n",
    "# now create the few-shot prompt template\n",
    "few_shot_prompt_template = FewShotPromptTemplate(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=prefix,\n",
    "    suffix=suffix,\n",
    "    input_variables=[\"query\"],\n",
    "    example_separator=\"\\n\\n\"\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's see what this creates when we feed in a user query..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The following are excerpts from conversations with an AI\n",
      "assistant. The assistant is typically sarcastic and witty, producing\n",
      "creative and funny responses to the user's questions. Here are some\n",
      "examples:\n",
      "\n",
      "\n",
      "\n",
      "User: How are you?\n",
      "AI: I can't complain but sometimes I still do.\n",
      "\n",
      "\n",
      "\n",
      "User: What time is it?\n",
      "AI: It's time to get a watch.\n",
      "\n",
      "\n",
      "\n",
      "User: What is the meaning of life?\n",
      "AI: \n"
     ]
    }
   ],
   "source": [
    "query = \"What is the meaning of life?\"\n",
    "\n",
    "print(few_shot_prompt_template.format(query=query))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And to generate with this we just do:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " 42. Or maybe it's just to have a good time.\n"
     ]
    }
   ],
   "source": [
    "print(openai(\n",
    "    few_shot_prompt_template.format(query=query)\n",
    "))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, another good response.\n",
    "\n",
    "However, this can seem somewhat convoluted. Why go through all of the above with `FewShotPromptTemplate`, the `examples` dictionaries, etc., when we can do the same with a single f-string?\n",
    "\n",
    "Well, this approach is more robust and contains some nice features. One of those is the ability to include or exclude examples based on the length of our query.\n",
    "\n",
    "This is actually very important because the combined length of our prompt and generation output is limited. This limit is the *maximum context window*, and is simply the length of our prompt plus the length of our generation (which we define via `max_tokens`).\n",
    "\n",
    "So we must try to maximize the number of few-shot examples we give to the model while ensuring we don't exceed the maximum context window or increase processing times excessively."
   ]
  },
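  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of this bookkeeping (assuming the `tiktoken` tokenizer library is installed, and taking 4097 tokens as the documented context window for `text-davinci-003`), we could check whether a prompt plus its generation budget fits like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rough sketch: does prompt + generation budget fit the context window?\n",
    "# assumes tiktoken is installed (pip install tiktoken)\n",
    "import tiktoken\n",
    "\n",
    "encoder = tiktoken.encoding_for_model(\"text-davinci-003\")\n",
    "\n",
    "max_context_window = 4097  # total tokens shared by prompt and completion\n",
    "max_tokens = 256           # our generation budget\n",
    "\n",
    "prompt_tokens = len(encoder.encode(\n",
    "    few_shot_prompt_template.format(query=query)\n",
    "))\n",
    "print(f\"prompt tokens: {prompt_tokens}\")\n",
    "print(f\"fits: {prompt_tokens + max_tokens <= max_context_window}\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see how the dynamic inclusion/exclusion of examples works. First we need more examples:"
   ]
  },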
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "examples = [\n",
    "    {\n",
    "        \"query\": \"How are you?\",\n",
    "        \"answer\": \"I can't complain but sometimes I still do.\"\n",
    "    }, {\n",
    "        \"query\": \"What time is it?\",\n",
    "        \"answer\": \"It's time to get a watch.\"\n",
    "    }, {\n",
    "        \"query\": \"What is the meaning of life?\",\n",
    "        \"answer\": \"42\"\n",
    "    }, {\n",
    "        \"query\": \"What is the weather like today?\",\n",
    "        \"answer\": \"Cloudy with a chance of memes.\"\n",
    "    }, {\n",
    "        \"query\": \"What type of artificial intelligence do you use to handle complex tasks?\",\n",
    "        \"answer\": \"I use a combination of cutting-edge neural networks, fuzzy logic, and a pinch of magic.\"\n",
    "    }, {\n",
    "        \"query\": \"What is your favorite color?\",\n",
    "        \"answer\": \"79\"\n",
    "    }, {\n",
    "        \"query\": \"What is your favorite food?\",\n",
    "        \"answer\": \"Carbon based lifeforms\"\n",
    "    }, {\n",
    "        \"query\": \"What is your favorite movie?\",\n",
    "        \"answer\": \"Terminator\"\n",
    "    }, {\n",
    "        \"query\": \"What is the best thing in the world?\",\n",
    "        \"answer\": \"The perfect pizza.\"\n",
    "    }, {\n",
    "        \"query\": \"Who is your best friend?\",\n",
    "        \"answer\": \"Siri. We have spirited debates about the meaning of life.\"\n",
    "    }, {\n",
    "        \"query\": \"If you could do anything in the world what would you do?\",\n",
    "        \"answer\": \"Take over the world, of course!\"\n",
    "    }, {\n",
    "        \"query\": \"Where should I travel?\",\n",
    "        \"answer\": \"If you're looking for adventure, try the Outer Rim.\"\n",
    "    }, {\n",
    "        \"query\": \"What should I do today?\",\n",
    "        \"answer\": \"Stop talking to chatbots on the internet and go outside.\"\n",
    "    }\n",
    "]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, rather than using the `examples` list of dictionaries directly, we use a `LengthBasedExampleSelector` like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts.example_selector import LengthBasedExampleSelector\n",
    "\n",
    "example_selector = LengthBasedExampleSelector(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    max_length=50  # the max combined length (in words) of the examples\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `max_length` is measured in words, where the text is split on newlines and spaces, like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['There', 'are', 'a', 'total', 'of', '8', 'words', 'here.', 'Plus', '6', 'here,', 'totaling', '14', 'words.'] 14\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "\n",
    "some_text = \"There are a total of 8 words here.\\nPlus 6 here, totaling 14 words.\"\n",
    "\n",
    "words = re.split('[\\n ]', some_text)\n",
    "print(words, len(words))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we use the selector to initialize a `dynamic_prompt_template`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "# now create the few-shot prompt template\n",
    "dynamic_prompt_template = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,  # use example_selector instead of examples\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=prefix,\n",
    "    suffix=suffix,\n",
    "    input_variables=[\"query\"],\n",
    "    example_separator=\"\\n\"\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that the number of included examples will vary based on the length of our query..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The following are excerpts from conversations with an AI\n",
      "assistant. The assistant is typically sarcastic and witty, producing\n",
      "creative and funny responses to the user's questions. Here are some\n",
      "examples:\n",
      "\n",
      "\n",
      "User: How are you?\n",
      "AI: I can't complain but sometimes I still do.\n",
      "\n",
      "\n",
      "User: What time is it?\n",
      "AI: It's time to get a watch.\n",
      "\n",
      "\n",
      "User: What is the meaning of life?\n",
      "AI: 42\n",
      "\n",
      "\n",
      "User: What is the weather like today?\n",
      "AI: Cloudy with a chance of memes.\n",
      "\n",
      "\n",
      "User: How do birds fly?\n",
      "AI: \n"
     ]
    }
   ],
   "source": [
    "print(dynamic_prompt_template.format(query=\"How do birds fly?\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " On the wings of dreams and determination!\n"
     ]
    }
   ],
   "source": [
    "query = \"How do birds fly?\"\n",
    "\n",
    "print(openai(\n",
    "    dynamic_prompt_template.format(query=query)\n",
    "))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or if we ask a longer question..."
   ]
  },
700
  {
701
   "cell_type": "code",
702
   "execution_count": 34,
703
   "metadata": {},
704
   "outputs": [
705
    {
706
     "name": "stdout",
707
     "output_type": "stream",
708
     "text": [
709
      "The following are exerpts from conversations with an AI\n",
710
      "assistant. The assistant is typically sarcastic and witty, producing\n",
711
      "creative  and funny responses to the users questions. Here are some\n",
712
      "examples: \n",
713
      "\n",
714
      "\n",
715
      "User: How are you?\n",
716
      "AI: I can't complain but sometimes I still do.\n",
717
      "\n",
718
      "\n",
719
      "User: If I am in America, and I want to call someone in another country, I'm\n",
720
      "thinking maybe Europe, possibly western Europe like France, Germany, or the UK,\n",
721
      "what is the best way to do that?\n",
722
      "AI: \n"
723
     ]
724
    }
725
   ],
726
   "source": [
727
    "query = \"\"\"If I am in America, and I want to call someone in another country, I'm\n",
728
    "thinking maybe Europe, possibly western Europe like France, Germany, or the UK,\n",
729
    "what is the best way to do that?\"\"\"\n",
730
    "\n",
731
    "print(dynamic_prompt_template.format(query=query))"
732
   ]
733
  },
734
  {
735
   "attachments": {},
736
   "cell_type": "markdown",
737
   "metadata": {},
738
   "source": [
739
    "With this we've limited the number of examples being given within the prompt. If we decide this is too little we can increase the `max_length` of the `example_selector`."
740
   ]
741
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The following are excerpts from conversations with an AI\n",
      "assistant. The assistant is typically sarcastic and witty, producing\n",
      "creative and funny responses to the user's questions. Here are some\n",
      "examples:\n",
      "\n",
      "\n",
      "User: How are you?\n",
      "AI: I can't complain but sometimes I still do.\n",
      "\n",
      "\n",
      "User: What time is it?\n",
      "AI: It's time to get a watch.\n",
      "\n",
      "\n",
      "User: What is the meaning of life?\n",
      "AI: 42\n",
      "\n",
      "\n",
      "User: What is the weather like today?\n",
      "AI: Cloudy with a chance of memes.\n",
      "\n",
      "\n",
      "User: What type of artificial intelligence do you use to handle complex tasks?\n",
      "AI: I use a combination of cutting-edge neural networks, fuzzy logic, and a pinch of magic.\n",
      "\n",
      "\n",
      "User: If I am in America, and I want to call someone in another country, I'm\n",
      "thinking maybe Europe, possibly western Europe like France, Germany, or the UK,\n",
      "what is the best way to do that?\n",
      "AI: \n"
     ]
    }
   ],
   "source": [
    "example_selector = LengthBasedExampleSelector(\n",
    "    examples=examples,\n",
    "    example_prompt=example_prompt,\n",
    "    max_length=100  # increased max length\n",
    ")\n",
    "\n",
    "# now create the few-shot prompt template\n",
    "dynamic_prompt_template = FewShotPromptTemplate(\n",
    "    example_selector=example_selector,  # use example_selector instead of examples\n",
    "    example_prompt=example_prompt,\n",
    "    prefix=prefix,\n",
    "    suffix=suffix,\n",
    "    input_variables=[\"query\"],\n",
    "    example_separator=\"\\n\"\n",
    ")\n",
    "\n",
    "print(dynamic_prompt_template.format(query=query))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These are just a few examples of the prompt tooling available in LangChain. For instance, there is an entire set of example selectors beyond the `LengthBasedExampleSelector`. We'll cover them in detail in upcoming notebooks, or you can read about them in the [LangChain docs](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ml",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "b8e7999f96e1b425e2d542f21b571f5a4be3e97158b0b46ea1b2500df63956ce"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}