{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "uZR3iGJJtdDE",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "uZR3iGJJtdDE",
        "outputId": "9ebb9f66-add2-4567-e37e-05323073e26b"
      },
      "outputs": [],
      "source": [
        "!pip install -qU langchain openai"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "7a4ba72d",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/02-langchain-chains.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/02-langchain-chains.ipynb)\n",
        "\n",
        "#### [LangChain Handbook](https://github.com/pinecone-io/examples/tree/master/generation/langchain/handbook)\n",
        "\n",
        "# Getting Started with Chains\n",
        "\n",
        "Chains are the core of LangChain. They are simply a chain of components, executed in a particular order.\n",
        "\n",
        "The simplest of these chains is the `LLMChain`. It works by taking a user's input and passing it to the first element in the chain — a `PromptTemplate` — which formats the input into a particular prompt. The formatted prompt is then passed to the next (and final) element in the chain — an LLM.\n",
        "\n",
        "We'll start by importing all the libraries that we'll be using in this example."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "66fb9c2a",
      "metadata": {
        "id": "66fb9c2a"
      },
      "outputs": [],
      "source": [
        "import inspect\n",
        "import re\n",
        "\n",
        "from getpass import getpass\n",
        "from langchain import OpenAI, PromptTemplate\n",
        "from langchain.chains import LLMChain, LLMMathChain, TransformChain, SequentialChain\n",
        "from langchain.callbacks import get_openai_callback"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "wPdWz1IdxyBR",
      "metadata": {
        "id": "wPdWz1IdxyBR"
      },
      "source": [
        "To run this notebook, we will need an OpenAI LLM. Here we set up the LLM we will use for the whole notebook; just enter your OpenAI API key when prompted."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "v86cmyppxdfc",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "v86cmyppxdfc",
        "outputId": "a897108c-be81-49b8-8802-d71741243e43"
      },
      "outputs": [],
      "source": [
        "OPENAI_API_KEY = getpass()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "baaa74b8",
      "metadata": {},
      "outputs": [],
      "source": [
        "llm = OpenAI(\n",
        "    temperature=0,\n",
        "    openai_api_key=OPENAI_API_KEY\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "309g_2pqxzzB",
      "metadata": {
        "id": "309g_2pqxzzB"
      },
      "source": [
        "An extra utility we will use is this function, which tells us how many tokens we are using in each call. This is good practice, and it becomes increasingly important as we use more complex tools that might make several calls to the API (like agents). Keeping close track of how many tokens we are spending helps us avoid unexpected costs."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "DsC3szr6yP3L",
      "metadata": {
        "id": "DsC3szr6yP3L"
      },
      "outputs": [],
      "source": [
        "def count_tokens(chain, query):\n",
        "    with get_openai_callback() as cb:\n",
        "        result = chain.run(query)\n",
        "        print(f'Spent a total of {cb.total_tokens} tokens')\n",
        "\n",
        "    return result"
      ]
    },
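    {
      "cell_type": "markdown",
      "id": "count-tokens-demo-md",
      "metadata": {},
      "source": [
        "As a quick sanity check, here is a minimal sketch that wraps the simplest possible chain — a `PromptTemplate` fed straight into an `LLMChain` — with `count_tokens` (names like `demo_prompt` and `demo_chain` are purely illustrative):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "count-tokens-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "# illustrative sketch: run a bare-bones LLMChain through count_tokens\n",
        "demo_prompt = PromptTemplate(input_variables=['query'], template='{query}')\n",
        "demo_chain = LLMChain(prompt=demo_prompt, llm=llm)\n",
        "\n",
        "count_tokens(demo_chain, \"Name three uses for a paperclip.\")"
      ]
    },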
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "6e1f31b4",
      "metadata": {
        "id": "6e1f31b4"
      },
      "source": [
        "## What are chains anyway?"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "5b919c3a",
      "metadata": {
        "id": "5b919c3a"
      },
      "source": [
        "**Definition**: Chains are one of the fundamental building blocks of this library (as you can guess!).\n",
        "\n",
        "The official definition of chains is the following:\n",
        "\n",
        "\n",
        "> A chain is made up of links, which can be either primitives or other chains. Primitives can be either prompts, llms, utils, or other chains.\n",
        "\n",
        "\n",
        "So a chain is basically a pipeline that processes an input by using a specific combination of primitives. Intuitively, it can be thought of as a 'step' that performs a certain set of operations on an input and returns the result. Chains can be anything from a prompt-based pass through an LLM to applying a Python function to a text."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c4644b2f",
      "metadata": {
        "id": "c4644b2f"
      },
      "source": [
        "Chains are divided into three types: Utility chains, Generic chains, and Combine Documents chains. In this edition, we will focus on the first two, since the third is too specific (it will be covered in due course).\n",
        "\n",
        "1. Utility Chains: chains that are usually used to extract a specific answer from an LLM for a very narrow purpose and are ready to be used out of the box.\n",
        "2. Generic Chains: chains that are used as building blocks for other chains but cannot be used out of the box on their own."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e4d283b6",
      "metadata": {
        "id": "e4d283b6"
      },
      "source": [
        "Let's take a peek into what these chains have to offer!"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "831827b7",
      "metadata": {
        "id": "831827b7"
      },
      "source": [
        "### Utility Chains"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6c66e4b4",
      "metadata": {
        "id": "6c66e4b4"
      },
      "source": [
        "Let's start with a simple utility chain. The `LLMMathChain` gives LLMs the ability to do math. Let's see how it works!"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "HF3XCWD2sVi0",
      "metadata": {
        "id": "HF3XCWD2sVi0"
      },
      "source": [
        "#### Pro-tip: use `verbose=True` to see what the different steps in the chain are!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 33,
      "id": "b4161561",
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "b4161561",
        "outputId": "16486831-80a5-41df-906e-376575b7f514"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
            "What is 13 raised to the .3432 power?\u001b[32;1m\u001b[1;3m\n",
            "```python\n",
            "import math\n",
            "print(math.pow(13, .3432))\n",
            "```\n",
            "\u001b[0m\n",
            "Answer: \u001b[33;1m\u001b[1;3m2.4116004626599237\n",
            "\u001b[0m\n",
            "\u001b[1m> Finished chain.\u001b[0m\n",
            "Spent a total of 272 tokens\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "'Answer: 2.4116004626599237\\n'"
            ]
          },
          "execution_count": 33,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "llm_math = LLMMathChain(llm=llm, verbose=True)\n",
        "\n",
        "count_tokens(llm_math, \"What is 13 raised to the .3432 power?\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "198eebb2",
      "metadata": {
        "id": "198eebb2"
      },
      "source": [
        "Let's see what is going on here. The chain received a question in natural language and sent it to the LLM. The LLM returned Python code, which the chain executed to give us an answer. A few questions arise... How did the LLM know that we wanted it to return Python code?"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "a7a0821a",
      "metadata": {
        "id": "a7a0821a"
      },
      "source": [
        "**Enter prompts**"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c86c5798",
      "metadata": {
        "id": "c86c5798"
      },
      "source": [
        "The question we send as input to the chain is not the only input that the LLM receives 😉. The input is inserted into a wider context, which gives precise instructions on how to interpret the input we send. This is called a _prompt_. Let's see what this chain's prompt is!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 34,
      "id": "62778ef4",
      "metadata": {
        "id": "62778ef4",
        "outputId": "211670a8-db56-4f68-d873-97d279870890"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "You are GPT-3, and you can't do math.\n",
            "\n",
            "You can do basic math, and your memorization abilities are impressive, but you can't do any complex calculations that a human could not do in their head. You also have an annoying tendency to just make up highly specific, but wrong, answers.\n",
            "\n",
            "So we hooked you up to a Python 3 kernel, and now you can execute code. If anyone gives you a hard math problem, just use this format and we’ll take care of the rest:\n",
            "\n",
            "Question: ${{Question with hard calculation.}}\n",
            "```python\n",
            "${{Code that prints what you need to know}}\n",
            "```\n",
            "```output\n",
            "${{Output of your code}}\n",
            "```\n",
            "Answer: ${{Answer}}\n",
            "\n",
            "Otherwise, use this simpler format:\n",
            "\n",
            "Question: ${{Question without hard calculation}}\n",
            "Answer: ${{Answer}}\n",
            "\n",
            "Begin.\n",
            "\n",
            "Question: What is 37593 * 67?\n",
            "\n",
            "```python\n",
            "print(37593 * 67)\n",
            "```\n",
            "```output\n",
            "2518731\n",
            "```\n",
            "Answer: 2518731\n",
            "\n",
            "Question: {question}\n",
            "\n"
          ]
        }
      ],
      "source": [
        "print(llm_math.prompt.template)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "708031d8",
      "metadata": {
        "id": "708031d8"
      },
      "source": [
        "Ok... let's see what we got here. We are literally telling the LLM that for complex math problems **it should not try to do the math on its own**, but should instead print Python code that will calculate the answer. If we just sent the query without any context, the LLM would probably try (and fail) to calculate the result on its own. Wait! This is testable... let's try it out! 🧐"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 35,
      "id": "66b92768",
      "metadata": {
        "id": "66b92768",
        "outputId": "6c9b7f59-529d-409e-8562-5a622f326473"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Spent a total of 17 tokens\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "'\\n\\n2.907'"
            ]
          },
          "execution_count": 35,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# we set the prompt to only have the question we ask\n",
        "prompt = PromptTemplate(input_variables=['question'], template='{question}')\n",
        "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
        "\n",
        "# we ask the llm for the answer with no context\n",
        "count_tokens(llm_chain, \"What is 13 raised to the .3432 power?\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d147e7bf",
      "metadata": {
        "id": "d147e7bf"
      },
      "source": [
        "Wrong answer! Herein lies the power of prompting and one of our most important insights so far:\n",
        "\n",
        "**Insight**: _by using prompts intelligently, we can force the LLM to avoid common pitfalls by explicitly and purposefully programming it to behave in a certain way._"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "1cd2a31f",
      "metadata": {
        "id": "1cd2a31f"
      },
      "source": [
        "Another interesting point about this chain is that it not only runs an input through the LLM but also executes the Python code the LLM returns. Let's see exactly how this works."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 36,
      "id": "3488c5b6",
      "metadata": {
        "id": "3488c5b6",
        "outputId": "8b32a998-8e11-48bf-f21c-f5d086186508"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:\n",
            "        llm_executor = LLMChain(prompt=self.prompt, llm=self.llm)\n",
            "        python_executor = PythonREPL()\n",
            "        self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)\n",
            "        t = llm_executor.predict(question=inputs[self.input_key], stop=[\"```output\"])\n",
            "        self.callback_manager.on_text(t, color=\"green\", verbose=self.verbose)\n",
            "        t = t.strip()\n",
            "        if t.startswith(\"```python\"):\n",
            "            code = t[9:-4]\n",
            "            output = python_executor.run(code)\n",
            "            self.callback_manager.on_text(\"\\nAnswer: \", verbose=self.verbose)\n",
            "            self.callback_manager.on_text(output, color=\"yellow\", verbose=self.verbose)\n",
            "            answer = \"Answer: \" + output\n",
            "        elif t.startswith(\"Answer:\"):\n",
            "            answer = t\n",
            "        else:\n",
            "            raise ValueError(f\"unknown format from LLM: {t}\")\n",
            "        return {self.output_key: answer}\n",
            "\n"
          ]
        }
      ],
      "source": [
        "print(inspect.getsource(llm_math._call))"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fa6b6c2e",
      "metadata": {
        "id": "fa6b6c2e"
      },
      "source": [
        "So we can see here that if the LLM returns Python code, the chain executes it with a Python REPL*. We now have the full picture: either the LLM returns an answer directly (for simple math problems), or it returns Python code which we execute to get an exact answer to harder problems. Smart!"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "67f96bd3",
      "metadata": {
        "id": "67f96bd3"
      },
      "source": [
        "Also notice that here we get our first example of **chain composition**, a key concept behind what makes langchain special. We are using the `LLMMathChain`, which in turn initializes and uses an `LLMChain` (a 'Generic Chain') when called. We can make any arbitrary number of such compositions, effectively 'chaining' many such chains to achieve highly complex and customizable behaviour."
      ]
    },
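    {
      "cell_type": "markdown",
      "id": "inner-chain-demo-md",
      "metadata": {},
      "source": [
        "To make that composition concrete, here is a minimal sketch that rebuilds the first step of `LLMMathChain` by hand from the `_call` source above, reusing `llm_math`'s own prompt and stop sequence (`inner_chain` is purely an illustrative name):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "inner-chain-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "# manually reproduce the inner LLMChain that LLMMathChain builds in _call:\n",
        "# same prompt, same llm, stopped right before the ```output block\n",
        "inner_chain = LLMChain(prompt=llm_math.prompt, llm=llm)\n",
        "\n",
        "print(inner_chain.predict(\n",
        "    question='What is 13 raised to the .3432 power?',\n",
        "    stop=['```output']\n",
        "))"
      ]
    },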
    {
      "cell_type": "markdown",
      "id": "b109619a",
      "metadata": {
        "id": "b109619a"
      },
      "source": [
        "Utility chains usually follow this same basic structure: there is a prompt that constrains the LLM to return a very specific type of response to a given query. We can ask the LLM to create SQL queries, API calls, and even Bash commands on the fly 🔥\n",
        "\n",
        "The list continues to grow as langchain becomes more flexible and powerful, so we encourage you to [check it out](https://langchain.readthedocs.io/en/latest/modules/chains/utility_how_to.html) and tinker with the example notebooks you find interesting."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "381e329c",
      "metadata": {
        "id": "381e329c"
      },
      "source": [
        "*_A Python REPL (Read-Eval-Print Loop) is an interactive shell for executing Python code line by line_"
      ]
    },
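    {
      "cell_type": "markdown",
      "id": "llm-bash-demo-md",
      "metadata": {},
      "source": [
        "For instance, `LLMBashChain` follows the same prompt-constrained pattern, turning a natural-language request into a shell command. A hedged sketch, assuming your installed langchain version still exports `LLMBashChain` from `langchain.chains` (newer releases moved it to `langchain_experimental`):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "llm-bash-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "# illustrative sketch: another utility chain built around a narrow prompt\n",
        "# (assumes LLMBashChain is still importable from langchain.chains in your version)\n",
        "from langchain.chains import LLMBashChain\n",
        "\n",
        "bash_chain = LLMBashChain(llm=llm, verbose=True)\n",
        "count_tokens(bash_chain, \"Please write a bash script that prints 'Hello World' to the console.\")"
      ]
    },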
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "f66a25a2",
      "metadata": {
        "id": "f66a25a2"
      },
      "source": [
        "### Generic chains"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "70b32a84",
      "metadata": {
        "id": "70b32a84"
      },
      "source": [
        "There are only three Generic Chains in langchain, and we will showcase all of them in the same example. Let's go!"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "4b8e2048",
      "metadata": {
        "id": "4b8e2048"
      },
      "source": [
        "Say we regularly receive dirty input texts. As we know, LLM providers charge us by the number of tokens we use, and we are not happy to pay extra when the input contains unnecessary characters. Plus, it's not neat 😉"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "a6e778d2",
      "metadata": {
        "id": "a6e778d2"
      },
      "source": [
        "First, we will build a custom transform function to clean the spacing of our texts. We will then use this function to build a chain where we input our text and expect a clean text as output."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 37,
      "id": "c794e00a",
      "metadata": {
        "id": "c794e00a"
      },
      "outputs": [],
      "source": [
        "def transform_func(inputs: dict) -> dict:\n",
        "    text = inputs[\"text\"]\n",
        "\n",
        "    # replace multiple new lines and multiple spaces with a single one\n",
        "    text = re.sub(r'(\\r\\n|\\r|\\n){2,}', r'\\n', text)\n",
        "    text = re.sub(r'[ \\t]+', ' ', text)\n",
        "\n",
        "    return {\"output_text\": text}"
      ]
    },
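    {
      "cell_type": "markdown",
      "id": "transform-func-demo-md",
      "metadata": {},
      "source": [
        "Before wrapping the function in a chain, we can sanity-check it directly; it takes and returns plain dicts (the messy string below is just an illustrative input):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "transform-func-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "# call the transform function directly on a messy string\n",
        "transform_func({\"text\": \"Hello   world.\\n\\n\\nHow    are   you?\"})"
      ]
    },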
    {
      "cell_type": "markdown",
      "id": "42dc1ac6",
      "metadata": {
        "id": "42dc1ac6"
      },
      "source": [
        "Importantly, when we initialize the chain we do not pass an LLM as an argument. As you can imagine, not having an LLM makes this chain's abilities much weaker than the example we saw earlier. However, as we will see next, combining this chain with other chains can give us highly desirable results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 38,
      "id": "286f7295",
      "metadata": {
        "id": "286f7295"
      },
      "outputs": [],
      "source": [
        "clean_extra_spaces_chain = TransformChain(input_variables=[\"text\"], output_variables=[\"output_text\"], transform=transform_func)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 39,
      "id": "977bf11a",
      "metadata": {
        "id": "977bf11a",
        "outputId": "8d6eaa5e-b417-4c17-a345-7b7e32071430"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "'A random text with some irregular spacing.\\n Another one here as well.'"
            ]
          },
          "execution_count": 39,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "clean_extra_spaces_chain.run('A random text  with   some irregular spacing.\\n\\n\\n     Another one   here as well.')"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b3f84cd0",
      "metadata": {
        "id": "b3f84cd0"
      },
      "source": [
        "Great! Now things will get interesting.\n",
        "\n",
        "Say we want to use our chain to clean an input text and then paraphrase the input in a specific style, say that of a poet or a police officer. As we now know, the `TransformChain` does not use an LLM, so the styling will have to be done elsewhere. That's where our `LLMChain` comes in. We know this chain already and we know that we can do cool things with smart prompting, so let's take a chance!"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "5b77042a",
      "metadata": {
        "id": "5b77042a"
      },
      "source": [
        "First we will build the prompt template:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 40,
      "id": "73719a5d",
      "metadata": {
        "id": "73719a5d"
      },
      "outputs": [],
      "source": [
        "template = \"\"\"Paraphrase this text:\n",
        "\n",
        "{output_text}\n",
        "\n",
        "In the style of {style}.\n",
        "\n",
        "Paraphrase: \"\"\"\n",
        "prompt = PromptTemplate(input_variables=[\"style\", \"output_text\"], template=template)"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "83b2ec83",
      "metadata": {
        "id": "83b2ec83"
      },
      "source": [
        "And next, initialize our chain:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 41,
      "id": "48a067ab",
      "metadata": {
        "id": "48a067ab"
      },
      "outputs": [],
      "source": [
        "style_paraphrase_chain = LLMChain(llm=llm, prompt=prompt, output_key='final_output')"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "2324005d",
      "metadata": {
        "id": "2324005d"
      },
      "source": [
        "Great! Notice that the input text in the template is called 'output_text'. Can you guess why?\n",
        "\n",
        "We are going to pass the output of the `TransformChain` to the `LLMChain`!"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c5da4925",
      "metadata": {
        "id": "c5da4925"
      },
      "source": [
        "Finally, we need to combine the two so they work as one integrated chain. For that we will use `SequentialChain`, which is our third generic chain building block."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 42,
      "id": "06f51f17",
      "metadata": {
        "id": "06f51f17"
      },
      "outputs": [],
      "source": [
        "sequential_chain = SequentialChain(\n",
        "    chains=[clean_extra_spaces_chain, style_paraphrase_chain],\n",
        "    input_variables=['text', 'style'],\n",
        "    output_variables=['final_output']\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7f0f51d8",
      "metadata": {
        "id": "7f0f51d8"
      },
      "source": [
        "Our input is the langchain docs' description of what chains are, but dirtied with some extra spaces all around."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 43,
      "id": "a8032489",
      "metadata": {
        "id": "a8032489"
      },
      "outputs": [],
      "source": [
        "input_text = \"\"\"\n",
        "Chains allow us to combine multiple \n",
        "\n",
        "\n",
        "components together to create a single, coherent application. \n",
        "\n",
        "For example, we can create a chain that takes user input,       format it with a PromptTemplate, \n",
        "\n",
        "and then passes the formatted response to an LLM. We can build more complex chains by combining     multiple chains together, or by \n",
        "\n",
        "\n",
        "combining chains with other components.\n",
        "\"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b2f55d21",
      "metadata": {
        "id": "b2f55d21"
      },
      "source": [
        "We are all set. Time to get creative!"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 44,
      "id": "d507aa5c",
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Spent a total of 163 tokens\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "\"\\nChains let us link up multiple pieces to make one dope app. Like, we can take user input, style it up with a PromptTemplate, then pass it to an LLM. We can get even more creative by combining multiple chains or mixin' chains with other components.\""
            ]
          },
          "execution_count": 44,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "count_tokens(sequential_chain, {'text': input_text, 'style': 'a 90s rapper'})"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "60b52e19",
      "metadata": {
        "id": "60b52e19"
      },
      "source": [
        "## A note on langchain-hub"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "02f649da",
      "metadata": {
        "id": "02f649da"
      },
      "source": [
        "`langchain-hub` is a sister library to `langchain`, where all the chains, agents and prompts are serialized for us to use."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 45,
      "id": "411500c2",
      "metadata": {
        "id": "411500c2"
      },
      "outputs": [],
      "source": [
        "from langchain.chains import load_chain"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "b375e5b7",
      "metadata": {
        "id": "b375e5b7"
      },
      "source": [
        "Loading from langchain hub is as easy as finding the chain you want in the repository and then using `load_chain` with the corresponding path. We also have `load_prompt` and `initialize_agent`, but more on that later. Let's see how we can do this with the `LLMMathChain` we saw earlier:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 51,
      "id": "fbe8748d",
      "metadata": {
        "id": "fbe8748d"
      },
      "outputs": [],
      "source": [
        "llm_math_chain = load_chain('lc://chains/llm-math/chain.json')"
      ]
    },
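    {
      "cell_type": "markdown",
      "id": "load-prompt-demo-md",
      "metadata": {},
      "source": [
        "`load_prompt` works the same way. A hedged sketch, where the `lc://` path is an assumption based on the langchain-hub repository layout (browse the hub for valid paths):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "load-prompt-demo",
      "metadata": {},
      "outputs": [],
      "source": [
        "# load a serialized prompt from langchain-hub by path\n",
        "# (the path below is an assumption; check the hub repo for valid entries)\n",
        "from langchain.prompts import load_prompt\n",
        "\n",
        "hub_prompt = load_prompt('lc://prompts/conversation/prompt.json')\n",
        "print(hub_prompt.template)"
      ]
    },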
    {
      "cell_type": "markdown",
      "id": "ebcfe67c",
      "metadata": {
        "id": "ebcfe67c"
      },
      "source": [
        "What if we want to change some of the configuration parameters? We can simply override them after loading:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 52,
      "id": "d0d54233",
      "metadata": {
        "id": "d0d54233",
        "outputId": "92eba1cf-e47b-4df1-cc51-caee5a6a720b"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "True"
            ]
          },
          "execution_count": 52,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "llm_math_chain.verbose"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 54,
      "id": "074f8806",
      "metadata": {
        "id": "074f8806"
      },
      "outputs": [],
      "source": [
        "llm_math_chain.verbose = False"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 49,
      "id": "465a6cbf",
      "metadata": {
        "id": "465a6cbf",
        "outputId": "0207bf08-0db0-4d85-e3b9-ac7922f344c4"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "False"
            ]
          },
          "execution_count": 49,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "llm_math_chain.verbose"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "id": "2cc688ca",
      "metadata": {},
      "source": [
        "That's it for this example on chains.\n",
        "\n",
        "---"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "ml",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.12"
    },
    "vscode": {
      "interpreter": {
        "hash": "b8e7999f96e1b425e2d542f21b571f5a4be3e97158b0b46ea1b2500df63956ce"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}