1
{
2
  "cells": [
3
    {
4
      "cell_type": "markdown",
5
      "id": "49e9c69e",
6
      "metadata": {
7
        "id": "49e9c69e"
8
      },
9
      "source": [
10
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/06-langchain-agents.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/06-langchain-agents.ipynb)"
11
      ]
12
    },
13
    {
14
      "cell_type": "markdown",
15
      "id": "1571b632",
16
      "metadata": {
17
        "id": "1571b632"
18
      },
19
      "source": [
20
        "#### [LangChain Handbook](https://pinecone.io/learn/langchain)\n",
21
        "\n",
22
        "# Agents 🤖\n",
23
        "\n",
24
        "Agents are like \"tools\" for LLMs. They allow an LLM to access Google search, perform complex calculations with Python, and even make SQL queries.\n",
25
        "\n",
26
        "In this notebook we'll explore agents and how to use them in LangChain.\n",
27
        "\n",
28
        "We'll start by installing the prerequisite libraries that we'll be using in this example."
29
      ]
30
    },
31
    {
32
      "cell_type": "code",
33
      "execution_count": 1,
34
      "id": "uZR3iGJJtdDE",
35
      "metadata": {
36
        "colab": {
37
          "base_uri": "https://localhost:8080/"
38
        },
39
        "id": "uZR3iGJJtdDE",
40
        "outputId": "0119ff4e-61fc-4d5f-9292-a719c54b8cb5"
41
      },
42
      "outputs": [
43
        {
44
          "output_type": "stream",
45
          "name": "stdout",
46
          "text": [
47
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.5/1.5 MB\u001b[0m \u001b[31m10.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
48
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.5/75.5 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
49
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
50
            "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
51
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m4.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
52
            "\u001b[?25h  Building wheel for google-search-results (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
53
            "  Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
54
          ]
55
        }
56
      ],
57
      "source": [
58
        "!pip install -qU langchain openai google-search-results wikipedia sqlalchemy"
59
      ]
60
    },
61
    {
62
      "cell_type": "markdown",
63
      "id": "wPdWz1IdxyBR",
64
      "metadata": {
65
        "id": "wPdWz1IdxyBR"
66
      },
67
      "source": [
68
        "To run this notebook, we will need to use an OpenAI LLM. Here we will set up the LLM we will use for the whole notebook; just enter your OpenAI API key when prompted."
69
      ]
70
    },
71
    {
72
      "cell_type": "code",
73
      "execution_count": 2,
74
      "id": "c02c4fa2",
75
      "metadata": {
76
        "colab": {
77
          "base_uri": "https://localhost:8080/"
78
        },
79
        "id": "c02c4fa2",
80
        "outputId": "ad22dddc-6180-48ac-86dd-265b6b7afa57"
81
      },
82
      "outputs": [
83
        {
84
          "name": "stdout",
85
          "output_type": "stream",
86
          "text": [
87
            "··········\n"
88
          ]
89
        }
90
      ],
91
      "source": [
92
        "from getpass import getpass\n",
93
        "\n",
94
        "OPENAI_API_KEY = getpass()"
95
      ]
96
    },
97
    {
98
      "cell_type": "code",
99
      "execution_count": 3,
100
      "id": "73bfcbb6",
101
      "metadata": {
102
        "id": "73bfcbb6"
103
      },
104
      "outputs": [],
105
      "source": [
106
        "from langchain import OpenAI\n",
107
        "\n",
108
        "llm = OpenAI(\n",
109
        "    openai_api_key=OPENAI_API_KEY,\n",
110
        "    temperature=0\n",
111
        ")"
112
      ]
113
    },
114
    {
115
      "cell_type": "markdown",
116
      "id": "309g_2pqxzzB",
117
      "metadata": {
118
        "id": "309g_2pqxzzB"
119
      },
120
      "source": [
121
        "As we did before, we will be counting our tokens in each call."
122
      ]
123
    },
124
    {
125
      "cell_type": "code",
126
      "execution_count": 4,
127
      "id": "DsC3szr6yP3L",
128
      "metadata": {
129
        "id": "DsC3szr6yP3L"
130
      },
131
      "outputs": [],
132
      "source": [
133
        "from langchain.callbacks import get_openai_callback\n",
134
        "\n",
135
        "def count_tokens(agent, query):\n",
136
        "    with get_openai_callback() as cb:\n",
137
        "        result = agent(query)\n",
138
        "        print(f'Spent a total of {cb.total_tokens} tokens')\n",
139
        "\n",
140
        "    return result"
141
      ]
142
    },
143
    {
144
      "cell_type": "markdown",
145
      "id": "bd60a97c",
146
      "metadata": {
147
        "id": "bd60a97c"
148
      },
149
      "source": [
150
        "With all of that set up, let's jump into **Agents**."
151
      ]
152
    },
153
    {
154
      "cell_type": "markdown",
155
      "id": "6e1f31b4",
156
      "metadata": {
157
        "id": "6e1f31b4"
158
      },
159
      "source": [
160
        "## What is an agent?"
161
      ]
162
    },
163
    {
164
      "cell_type": "markdown",
165
      "id": "5b919c3a",
166
      "metadata": {
167
        "id": "5b919c3a"
168
      },
169
      "source": [
170
        "**Definition**: The key behind agents is giving LLMs the ability to use tools in their workflow. This is where LangChain departs from the popular ChatGPT implementation, and we can start to get a glimpse of what it offers us as builders. Until now, we have covered several building blocks in isolation. Let's see them come to life.\n",
171
        "\n",
172
        "The official definition of agents is the following:\n",
173
        "\n",
174
        "\n",
175
        "> Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user."
176
      ]
177
    },
178
    {
179
      "cell_type": "markdown",
180
      "id": "6c9c13e9",
181
      "metadata": {
182
        "id": "6c9c13e9"
183
      },
184
      "source": [
185
        "In this edition we will cover what we may call 'generic' agents, which are able to perform many general-purpose tasks. There are other, more specific agents that are tuned for different tasks (called 'agent-toolkits'), but we will cover those in a future edition."
186
      ]
187
    },
188
    {
189
      "cell_type": "markdown",
190
      "id": "a93dc920",
191
      "metadata": {
192
        "id": "a93dc920"
193
      },
194
      "source": [
195
        "## Create database"
196
      ]
197
    },
198
    {
199
      "cell_type": "markdown",
200
      "id": "b601f903",
201
      "metadata": {
202
        "id": "b601f903"
203
      },
204
      "source": [
205
        "We will use the agents to interact with a small sample database of stocks. We will not dive into the details because this is just a dummy tool we will build for illustrative purposes. Let's create it."
206
      ]
207
    },
208
    {
209
      "cell_type": "code",
210
      "execution_count": 5,
211
      "id": "61b1f17c",
212
      "metadata": {
213
        "id": "61b1f17c"
214
      },
215
      "outputs": [],
216
      "source": [
217
        "from sqlalchemy import MetaData\n",
218
        "\n",
219
        "metadata_obj = MetaData()"
220
      ]
221
    },
222
    {
223
      "cell_type": "code",
224
      "execution_count": 6,
225
      "id": "3cc1d80e",
226
      "metadata": {
227
        "id": "3cc1d80e"
228
      },
229
      "outputs": [],
230
      "source": [
231
        "from sqlalchemy import Column, Integer, String, Table, Date, Float\n",
232
        "\n",
233
        "stocks = Table(\n",
234
        "    \"stocks\",\n",
235
        "    metadata_obj,\n",
236
        "    Column(\"obs_id\", Integer, primary_key=True),\n",
237
        "    Column(\"stock_ticker\", String(4), nullable=False),\n",
238
        "    Column(\"price\", Float, nullable=False),\n",
239
        "    Column(\"date\", Date, nullable=False),\n",
240
        ")"
241
      ]
242
    },
243
    {
244
      "cell_type": "code",
245
      "execution_count": 7,
246
      "id": "c9a9571a",
247
      "metadata": {
248
        "id": "c9a9571a"
249
      },
250
      "outputs": [],
251
      "source": [
252
        "from sqlalchemy import create_engine\n",
253
        "\n",
254
        "engine = create_engine(\"sqlite:///:memory:\")\n",
255
        "metadata_obj.create_all(engine)"
256
      ]
257
    },
258
    {
259
      "cell_type": "code",
260
      "execution_count": 8,
261
      "id": "81c3081f",
262
      "metadata": {
263
        "id": "81c3081f"
264
      },
265
      "outputs": [],
266
      "source": [
267
        "from datetime import datetime\n",
268
        "\n",
269
        "observations = [\n",
270
        "    [1, 'ABC', 200, datetime(2023, 1, 1)],\n",
271
        "    [2, 'ABC', 208, datetime(2023, 1, 2)],\n",
272
        "    [3, 'ABC', 232, datetime(2023, 1, 3)],\n",
273
        "    [4, 'ABC', 225, datetime(2023, 1, 4)],\n",
274
        "    [5, 'ABC', 226, datetime(2023, 1, 5)],\n",
275
        "    [6, 'XYZ', 810, datetime(2023, 1, 1)],\n",
276
        "    [7, 'XYZ', 803, datetime(2023, 1, 2)],\n",
277
        "    [8, 'XYZ', 798, datetime(2023, 1, 3)],\n",
278
        "    [9, 'XYZ', 795, datetime(2023, 1, 4)],\n",
279
        "    [10, 'XYZ', 791, datetime(2023, 1, 5)],\n",
280
        "]"
281
      ]
282
    },
283
    {
284
      "cell_type": "code",
285
      "execution_count": 9,
286
      "id": "85fd20fa",
287
      "metadata": {
288
        "id": "85fd20fa"
289
      },
290
      "outputs": [],
291
      "source": [
292
        "from sqlalchemy import insert\n",
293
        "\n",
294
        "def insert_obs(obs):\n",
295
        "    stmt = insert(stocks).values(\n",
296
        "        obs_id=obs[0],\n",
297
        "        stock_ticker=obs[1],\n",
298
        "        price=obs[2],\n",
299
        "        date=obs[3]\n",
300
        "    )\n",
301
        "\n",
302
        "    with engine.begin() as conn:\n",
303
        "        conn.execute(stmt)"
304
      ]
305
    },
306
    {
307
      "cell_type": "code",
308
      "execution_count": 10,
309
      "id": "6766f1f7",
310
      "metadata": {
311
        "id": "6766f1f7"
312
      },
313
      "outputs": [],
314
      "source": [
315
        "for obs in observations:\n",
316
        "    insert_obs(obs)"
317
      ]
318
    },
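    {
      "cell_type": "markdown",
      "id": "added-db-sanity-check-md",
      "metadata": {},
      "source": [
        "If you'd like to double-check the inserts, you can query the table directly with SQLAlchemy (assuming SQLAlchemy 1.4+; output not shown here):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-db-sanity-check-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "from sqlalchemy import select\n",
        "\n",
        "# Print every row currently stored in the stocks table\n",
        "with engine.connect() as conn:\n",
        "    for row in conn.execute(select(stocks)):\n",
        "        print(row)"
      ]
    },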
319
    {
320
      "cell_type": "markdown",
321
      "source": [
322
        "We are installing the `langchain_experimental` library here, since `SQLDatabaseChain` is located there. It may be moved into the official `langchain` library in the future."
323
      ],
324
      "metadata": {
325
        "id": "H3_tIYndeNAt"
326
      },
327
      "id": "H3_tIYndeNAt"
328
    },
329
    {
330
      "cell_type": "code",
331
      "source": [
332
        "!pip install langchain_experimental -qU"
333
      ],
334
      "metadata": {
335
        "id": "KJJz9xXvb1Ej",
336
        "colab": {
337
          "base_uri": "https://localhost:8080/"
338
        },
339
        "outputId": "9533f802-804d-4c4c-c19f-87827bbc178c"
340
      },
341
      "id": "KJJz9xXvb1Ej",
342
      "execution_count": 12,
343
      "outputs": [
344
        {
345
          "output_type": "stream",
346
          "name": "stdout",
347
          "text": [
348
            "\u001b[?25l     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/70.7 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m70.7/70.7 kB\u001b[0m \u001b[31m2.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
349
            "\u001b[?25h"
350
          ]
351
        }
352
      ]
353
    },
354
    {
355
      "cell_type": "code",
356
      "execution_count": 13,
357
      "id": "9721648e",
358
      "metadata": {
359
        "id": "9721648e",
360
        "colab": {
361
          "base_uri": "https://localhost:8080/"
362
        },
363
        "outputId": "9e77c155-c662-47c2-b585-deb88e8ad51b"
364
      },
365
      "outputs": [
366
        {
367
          "output_type": "stream",
368
          "name": "stderr",
369
          "text": [
370
            "/usr/local/lib/python3.10/dist-packages/langchain_experimental/sql/base.py:66: UserWarning: Directly instantiating an SQLDatabaseChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.\n",
371
            "  warnings.warn(\n"
372
          ]
373
        }
374
      ],
375
      "source": [
376
        "from langchain.utilities import SQLDatabase\n",
377
        "from langchain_experimental.sql import SQLDatabaseChain\n",
378
        "\n",
379
        "db = SQLDatabase(engine)\n",
380
        "sql_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)"
381
      ]
382
    },
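    {
      "cell_type": "markdown",
      "id": "added-from-llm-note",
      "metadata": {},
      "source": [
        "The warning above tells us that instantiating `SQLDatabaseChain` directly with an `llm` is deprecated. If you prefer, the `from_llm` class method that the warning suggests should work as an equivalent way to build the chain:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-from-llm-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Equivalent construction via the class method suggested by the warning\n",
        "sql_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
      ]
    },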
383
    {
384
      "cell_type": "markdown",
385
      "id": "6325f758",
386
      "metadata": {
387
        "id": "6325f758"
388
      },
389
      "source": [
390
        "## Agent types"
391
      ]
392
    },
393
    {
394
      "cell_type": "markdown",
395
      "id": "4d732b7a",
396
      "metadata": {
397
        "id": "4d732b7a"
398
      },
399
      "source": [
400
        "In this section we will review several agents and see how they 'think' and what they can do."
401
      ]
402
    },
403
    {
404
      "cell_type": "markdown",
405
      "id": "8ee98631",
406
      "metadata": {
407
        "id": "8ee98631"
408
      },
409
      "source": [
410
        "Using one of LangChain's pre-built agents involves three variables:\n",
411
        "* defining the tools or the toolkit\n",
412
        "* defining the llm\n",
413
        "* defining the agent type\n",
414
        "\n",
415
        "This is all really easy to do in LangChain, as we will see in the following example."
416
      ]
417
    },
418
    {
419
      "cell_type": "markdown",
420
      "id": "1eaf306a",
421
      "metadata": {
422
        "id": "1eaf306a"
423
      },
424
      "source": [
425
        "### Agent type #1: Zero Shot React"
426
      ]
427
    },
428
    {
429
      "cell_type": "markdown",
430
      "source": [
431
        "In this first example we will use a slightly different type of agent: a SQL agent, which can be instantiated with its own method `create_sql_agent`. Other agents will be instantiated in a more generic way, as we will see below in the other examples.\n",
432
        "<br><br>\n",
433
        "This method uses a *toolkit* instead of a simple list of `tools`. You can read more about toolkits in the [documentation](https://python.langchain.com/docs/modules/agents/toolkits/). For this use case, we will use the `SQLDatabaseToolkit`."
434
      ],
435
      "metadata": {
436
        "id": "Tgn6dRLEcxli"
437
      },
438
      "id": "Tgn6dRLEcxli"
439
    },
440
    {
441
      "cell_type": "markdown",
442
      "source": [
443
        "As the name suggests, we will use this agent to perform 'zero shot' tasks on the input. That means that we will not have several interdependent interactions, but only one. In other words, this agent will have no memory."
444
      ],
445
      "metadata": {
446
        "id": "mie_37ERl_ac"
447
      },
448
      "id": "mie_37ERl_ac"
449
    },
450
    {
451
      "cell_type": "markdown",
452
      "source": [
453
        "Now we are ready to initialize the agent! We will set `verbose=True` so we can see our agent's 'thinking' process."
454
      ],
455
      "metadata": {
456
        "id": "O_pYBlCcl11l"
457
      },
458
      "id": "O_pYBlCcl11l"
459
    },
460
    {
461
      "cell_type": "markdown",
462
      "source": [
463
        "**Important Note:** *When interacting with agents it is really important to set the `max_iterations` parameter, because agents can get stuck in infinite loops that consume plenty of tokens. The default value is 15 to allow for many tools and complex reasoning, but for most applications you should keep it much lower.*"
464
      ],
465
      "metadata": {
466
        "id": "GtSMUCaIlupp"
467
      },
468
      "id": "GtSMUCaIlupp"
469
    },
470
    {
471
      "cell_type": "code",
472
      "source": [
473
        "from langchain.agents import create_sql_agent\n",
474
        "from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
475
        "from langchain.agents.agent_types import AgentType\n",
476
        "\n",
477
        "agent_executor = create_sql_agent(\n",
478
        "    llm=llm,\n",
479
        "    toolkit=SQLDatabaseToolkit(db=db, llm=llm),\n",
480
        "    verbose=True,\n",
481
        "    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
482
        "    max_iterations=3\n",
483
        ")"
484
      ],
485
      "metadata": {
486
        "id": "5Z4EmMmqiOvZ"
487
      },
488
      "id": "5Z4EmMmqiOvZ",
489
      "execution_count": 14,
490
      "outputs": []
491
    },
492
    {
493
      "cell_type": "markdown",
494
      "source": [
495
        "Let's see our newly created agent in action! We will ask it a question that involves a math operation over the stock prices."
496
      ],
497
      "metadata": {
498
        "id": "iBqz6aFzj-2d"
499
      },
500
      "id": "iBqz6aFzj-2d"
501
    },
502
    {
503
      "cell_type": "code",
504
      "source": [
505
        "result = count_tokens(\n",
506
        "    agent_executor,\n",
507
        "    \"What is the multiplication of the ratio between stock \" +\n",
508
        "    \"prices for 'ABC' and 'XYZ' in January 3rd and the ratio \" +\n",
509
        "    \"between the same stock prices in January the 4th?\"\n",
510
        ")"
511
      ],
512
      "metadata": {
513
        "colab": {
514
          "base_uri": "https://localhost:8080/"
515
        },
516
        "id": "MdvgpwHRic3W",
517
        "outputId": "5e63199b-2d92-404a-c712-5e8f4f55e23d"
518
      },
519
      "id": "MdvgpwHRic3W",
520
      "execution_count": 15,
521
      "outputs": [
522
        {
523
          "output_type": "stream",
524
          "name": "stdout",
525
          "text": [
526
            "\n",
527
            "\n",
528
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
529
            "\u001b[32;1m\u001b[1;3mAction: sql_db_list_tables\n",
530
            "Action Input: \u001b[0m\n",
531
            "Observation: \u001b[38;5;200m\u001b[1;3mstocks\u001b[0m\n",
532
            "Thought:\u001b[32;1m\u001b[1;3m I should query the schema of the stocks table to see what columns I can use.\n",
533
            "Action: sql_db_schema\n",
534
            "Action Input: stocks\u001b[0m\n",
535
            "Observation: \u001b[33;1m\u001b[1;3m\n",
536
            "CREATE TABLE stocks (\n",
537
            "\tobs_id INTEGER NOT NULL, \n",
538
            "\tstock_ticker VARCHAR(4) NOT NULL, \n",
539
            "\tprice FLOAT NOT NULL, \n",
540
            "\tdate DATE NOT NULL, \n",
541
            "\tPRIMARY KEY (obs_id)\n",
542
            ")\n",
543
            "\n",
544
            "/*\n",
545
            "3 rows from stocks table:\n",
546
            "obs_id\tstock_ticker\tprice\tdate\n",
547
            "1\tABC\t200.0\t2023-01-01\n",
548
            "2\tABC\t208.0\t2023-01-02\n",
549
            "3\tABC\t232.0\t2023-01-03\n",
550
            "*/\u001b[0m\n",
551
            "Thought:\u001b[32;1m\u001b[1;3m I should query the ratio between the stock prices for 'ABC' and 'XYZ' in January 3rd and the ratio between the same stock prices in January the 4th.\n",
552
            "Action: sql_db_query\n",
553
            "Action Input: SELECT (SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-03') / (SELECT price FROM stocks WHERE stock_ticker = 'XYZ' AND date = '2023-01-03') / (SELECT price FROM stocks WHERE stock_ticker = 'ABC' AND date = '2023-01-04') * (SELECT price FROM stocks WHERE stock_ticker = 'XYZ' AND date = '2023-01-04') AS ratio;\u001b[0m\n",
554
            "Observation: \u001b[36;1m\u001b[1;3m[(1.027234753550543,)]\u001b[0m\n",
555
            "Thought:\u001b[32;1m\u001b[1;3m\u001b[0m\n",
556
            "\n",
557
            "\u001b[1m> Finished chain.\u001b[0m\n",
558
            "Spent a total of 2398 tokens\n"
559
          ]
560
        }
561
      ]
562
    },
563
    {
564
      "cell_type": "markdown",
565
      "source": [
566
        "As always, let's see what the prompt is here:"
567
      ],
568
      "metadata": {
569
        "id": "LqzuNMgWmFXS"
570
      },
571
      "id": "LqzuNMgWmFXS"
572
    },
573
    {
574
      "cell_type": "code",
575
      "source": [
576
        "print(agent_executor.agent.llm_chain.prompt.template)"
577
      ],
578
      "metadata": {
579
        "colab": {
580
          "base_uri": "https://localhost:8080/"
581
        },
582
        "id": "ny7m_wMCmF4b",
583
        "outputId": "ab8fdf8a-6f83-4f3f-e9dc-7313b48f81ef"
584
      },
585
      "id": "ny7m_wMCmF4b",
586
      "execution_count": 16,
587
      "outputs": [
588
        {
589
          "output_type": "stream",
590
          "name": "stdout",
591
          "text": [
592
            "You are an agent designed to interact with a SQL database.\n",
593
            "Given an input question, create a syntactically correct sqlite query to run, then look at the results of the query and return the answer.\n",
594
            "Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results.\n",
595
            "You can order the results by a relevant column to return the most interesting examples in the database.\n",
596
            "Never query for all the columns from a specific table, only ask for the relevant columns given the question.\n",
597
            "You have access to tools for interacting with the database.\n",
598
            "Only use the below tools. Only use the information returned by the below tools to construct your final answer.\n",
599
            "You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n",
600
            "\n",
601
            "DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n",
602
            "\n",
603
            "If the question does not seem related to the database, just return \"I don't know\" as the answer.\n",
604
            "\n",
605
            "\n",
606
            "sql_db_query: Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', using sql_db_schema to query the correct table fields.\n",
607
            "sql_db_schema: Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: 'table1, table2, table3'\n",
608
            "sql_db_list_tables: Input is an empty string, output is a comma separated list of tables in the database.\n",
609
            "sql_db_query_checker: Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!\n",
610
            "\n",
611
            "Use the following format:\n",
612
            "\n",
613
            "Question: the input question you must answer\n",
614
            "Thought: you should always think about what to do\n",
615
            "Action: the action to take, should be one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker]\n",
616
            "Action Input: the input to the action\n",
617
            "Observation: the result of the action\n",
618
            "... (this Thought/Action/Action Input/Observation can repeat N times)\n",
619
            "Thought: I now know the final answer\n",
620
            "Final Answer: the final answer to the original input question\n",
621
            "\n",
622
            "Begin!\n",
623
            "\n",
624
            "Question: {input}\n",
625
            "Thought: I should look at the tables in the database to see what I can query.  Then I should query the schema of the most relevant tables.\n",
626
            "{agent_scratchpad}\n"
627
          ]
628
        }
629
      ]
630
    },
631
    {
632
      "cell_type": "markdown",
633
      "id": "5b2b47ed",
634
      "metadata": {
635
        "id": "5b2b47ed"
636
      },
637
      "source": [
638
        "The question we must ask ourselves here is: how are agents different from chains?"
639
      ]
640
    },
641
    {
642
      "cell_type": "markdown",
643
      "id": "f73aaf07",
644
      "metadata": {
645
        "id": "f73aaf07"
646
      },
647
      "source": [
648
        "If we look at the agent's logic and the prompt we have just printed, we will see some clear differences. First, the tools are included in the prompt. Second, we have a thought process which in chains was immediate, but which now involves a 'thought', 'action', 'action input', 'observation' sequence. What is this all about?"
649
      ]
650
    },
651
    {
652
      "cell_type": "markdown",
653
      "id": "3a698f78",
654
      "metadata": {
655
        "id": "3a698f78"
656
      },
657
      "source": [
658
        "Suffice it to say for now that **the LLM now has the ability to 'reason' on how to best use tools** to solve our query and can combine them in intelligent ways with just a brief description of each of them. If you want to learn more about this paradigm (MRKL) in detail, please refer to [this](https://arxiv.org/pdf/2205.00445.pdf) paper."
659
      ]
660
    },
661
    {
662
      "cell_type": "markdown",
663
      "id": "aff9e65d",
664
      "metadata": {
665
        "id": "aff9e65d"
666
      },
667
      "source": [
668
        "Finally, let's pay attention to the 'agent_scratchpad'. What is that? Well, that is where we will be appending every thought or action that the agent has already performed. In this way, at each point in time, the agent will know what it has found out and will be able to continue its thought process. In other words, after using a tool it adds its thoughts and observations to the scratchpad and picks up from there."
669
      ]
670
    },
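    {
      "cell_type": "markdown",
      "id": "added-scratchpad-sketch-md",
      "metadata": {},
      "source": [
        "To make this concrete, here is a simplified sketch of how intermediate steps could be rendered into the scratchpad text that gets appended to the prompt. This is an illustration only, not LangChain's internal implementation, and the example steps are made up:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-scratchpad-sketch-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "# A simplified illustration (not LangChain's internal code) of how\n",
        "# intermediate steps could be rendered into the agent_scratchpad text.\n",
        "intermediate_steps = [\n",
        "    (\"sql_db_list_tables\", \"\", \"stocks\"),\n",
        "    (\"sql_db_schema\", \"stocks\", \"CREATE TABLE stocks (...)\"),\n",
        "]\n",
        "\n",
        "scratchpad = \"\"\n",
        "for action, action_input, observation in intermediate_steps:\n",
        "    scratchpad += f\"Action: {action}\\n\"\n",
        "    scratchpad += f\"Action Input: {action_input}\\n\"\n",
        "    scratchpad += f\"Observation: {observation}\\nThought:\"\n",
        "\n",
        "print(scratchpad)"
      ]
    },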
671
    {
672
      "cell_type": "markdown",
673
      "id": "04d70642",
674
      "metadata": {
675
        "id": "04d70642"
676
      },
677
      "source": [
678
        "### Agent type #2: Conversational React"
679
      ]
680
    },
681
    {
682
      "cell_type": "markdown",
683
      "id": "1ec589b1",
684
      "metadata": {
685
        "id": "1ec589b1"
686
      },
687
      "source": [
688
        "The zero shot agent is really interesting but, as we said before, it has no memory. What if we want an assistant that remembers things we have talked about and can also reason about them and use tools? For that we have the conversational react agent."
689
      ]
690
    },
691
    {
692
      "cell_type": "markdown",
693
      "id": "5c26c50f",
694
      "metadata": {
695
        "id": "5c26c50f"
696
      },
697
      "source": [
698
        "We will use the math tool in this example and load it as below:"
699
      ]
700
    },
701
    {
702
      "cell_type": "code",
703
      "execution_count": 17,
704
      "id": "4b6faff3",
705
      "metadata": {
706
        "id": "4b6faff3"
707
      },
708
      "outputs": [],
709
      "source": [
710
        "from langchain.agents import load_tools\n",
711
        "\n",
712
        "tools = load_tools(\n",
713
        "    [\"llm-math\"],\n",
714
        "    llm=llm\n",
715
        ")"
716
      ]
717
    },
718
    {
719
      "cell_type": "markdown",
720
      "id": "be45b903",
721
      "metadata": {
722
        "id": "be45b903"
723
      },
724
      "source": [
725
        "The memory type being used here is a simple buffer memory to allow us to remember previous steps in the reasoning chain. For more information on memory, please refer to the 3rd chapter of this series."
726
      ]
727
    },
728
    {
729
      "cell_type": "code",
730
      "execution_count": 18,
731
      "id": "0aff4edf",
732
      "metadata": {
733
        "id": "0aff4edf"
734
      },
735
      "outputs": [],
736
      "source": [
737
        "from langchain.memory import ConversationBufferMemory\n",
738
        "\n",
739
        "memory = ConversationBufferMemory(memory_key=\"chat_history\")"
740
      ]
741
    },
742
    {
743
      "cell_type": "code",
744
      "execution_count": 19,
745
      "id": "6579cef0",
746
      "metadata": {
747
        "id": "6579cef0"
748
      },
749
      "outputs": [],
750
      "source": [
751
        "from langchain.agents import initialize_agent\n",
752
        "\n",
753
        "conversational_agent = initialize_agent(\n",
754
        "    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n",
755
        "    tools=tools,\n",
756
        "    llm=llm,\n",
757
        "    verbose=True,\n",
758
        "    max_iterations=3,\n",
759
        "    memory=memory,\n",
760
        ")"
761
      ]
762
    },
763
    {
764
      "cell_type": "code",
765
      "execution_count": 20,
766
      "id": "cabbea50",
767
      "metadata": {
768
        "colab": {
769
          "base_uri": "https://localhost:8080/"
770
        },
771
        "id": "cabbea50",
772
        "outputId": "760bde78-a553-46e0-da3b-433cd0196613"
773
      },
774
      "outputs": [
775
        {
776
          "output_type": "stream",
777
          "name": "stdout",
778
          "text": [
779
            "\n",
780
            "\n",
781
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
782
            "\u001b[32;1m\u001b[1;3m\n",
783
            "Thought: Do I need to use a tool? Yes\n",
784
            "Action: Calculator\n",
785
            "Action Input: 10,000, 8%, 5 years\u001b[0m\n",
786
            "Observation: \u001b[36;1m\u001b[1;3mAnswer: 14693.280768000006\u001b[0m\n",
787
            "Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? No\n",
788
            "AI: The result of an investment of $10,000 growing at 8% annually for 5 years with compound interest is $14,693.28.\u001b[0m\n",
789
            "\n",
790
            "\u001b[1m> Finished chain.\u001b[0m\n",
791
            "Spent a total of 1246 tokens\n"
792
          ]
793
        }
794
      ],
795
      "source": [
796
        "result = count_tokens(\n",
797
        "    conversational_agent,\n",
798
        "    \"What's the result of an investment of $10,000 growing at 8% annually for 5 years with compound interest?\"\n",
799
        ")"
800
      ]
801
    },
802
    {
803
      "cell_type": "markdown",
804
      "id": "bdcd9847",
805
      "metadata": {
806
        "id": "bdcd9847"
807
      },
808
      "source": [
809
        "As we can see below, the prompt is similar, but it includes a long prelude of instructions that make it an effective assistant, as well as a slot for including the chat history from the memory component:"
810
      ]
811
    },
812
    {
813
      "cell_type": "code",
814
      "execution_count": 21,
815
      "id": "de413fd3",
816
      "metadata": {
817
        "colab": {
818
          "base_uri": "https://localhost:8080/"
819
        },
820
        "id": "de413fd3",
821
        "outputId": "f79dc083-db80-4dbb-8fb6-da5f6f603645"
822
      },
823
      "outputs": [
824
        {
825
          "output_type": "stream",
826
          "name": "stdout",
827
          "text": [
828
            "Assistant is a large language model trained by OpenAI.\n",
829
            "\n",
830
            "Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n",
831
            "\n",
832
            "Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n",
833
            "\n",
834
            "Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n",
835
            "\n",
836
            "TOOLS:\n",
837
            "------\n",
838
            "\n",
839
            "Assistant has access to the following tools:\n",
840
            "\n",
841
            "> Calculator: Useful for when you need to answer questions about math.\n",
842
            "\n",
843
            "To use a tool, please use the following format:\n",
844
            "\n",
845
            "```\n",
846
            "Thought: Do I need to use a tool? Yes\n",
847
            "Action: the action to take, should be one of [Calculator]\n",
848
            "Action Input: the input to the action\n",
849
            "Observation: the result of the action\n",
850
            "```\n",
851
            "\n",
852
            "When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n",
853
            "\n",
854
            "```\n",
855
            "Thought: Do I need to use a tool? No\n",
856
            "AI: [your response here]\n",
857
            "```\n",
858
            "\n",
859
            "Begin!\n",
860
            "\n",
861
            "Previous conversation history:\n",
862
            "{chat_history}\n",
863
            "\n",
864
            "New input: {input}\n",
865
            "{agent_scratchpad}\n"
866
          ]
867
        }
868
      ],
869
      "source": [
870
        "print(conversational_agent.agent.llm_chain.prompt.template)"
871
      ]
872
    },
873
    {
874
      "cell_type": "markdown",
875
      "id": "32ee8041",
876
      "metadata": {
877
        "id": "32ee8041"
878
      },
879
      "source": [
880
        "Let's see what happens if we ask a follow-up question that is related to the previous one:"
881
      ]
882
    },
883
    {
884
      "cell_type": "code",
885
      "execution_count": 22,
886
      "id": "5e109878",
887
      "metadata": {
888
        "colab": {
889
          "base_uri": "https://localhost:8080/"
890
        },
891
        "id": "5e109878",
892
        "outputId": "51a80167-1e1b-4717-e3bc-bf25ad1c6cb7"
893
      },
894
      "outputs": [
895
        {
896
          "output_type": "stream",
897
          "name": "stdout",
898
          "text": [
899
            "\n",
900
            "\n",
901
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
902
            "\u001b[32;1m\u001b[1;3m\n",
903
            "Thought: Do I need to use a tool? Yes\n",
904
            "Action: Calculator\n",
905
            "Action Input: 15000, 8%, 5 years, compound interest\u001b[0m\n",
906
            "Observation: \u001b[36;1m\u001b[1;3mAnswer: 22039.92115200001\u001b[0m\n",
907
            "Thought:\u001b[32;1m\u001b[1;3m Do I need to use a tool? No\n",
908
            "AI: If we start with $15,000 instead and follow the same 8% annual growth for 5 years with compound interest, we would have $22,039.92, which is $7,346.64 more than the previous scenario.\u001b[0m\n",
909
            "\n",
910
            "\u001b[1m> Finished chain.\u001b[0m\n",
911
            "Spent a total of 1416 tokens\n"
912
          ]
913
        }
914
      ],
915
      "source": [
916
        "result = count_tokens(\n",
917
        "    conversational_agent,\n",
918
        "    \"If we start with $15,000 instead and follow the same 8% annual growth for 5 years with compound interest, how much more would we have compared to the previous scenario?\"\n",
919
        ")"
920
      ]
921
    },
922
    {
923
      "cell_type": "markdown",
924
      "id": "44135b8d",
925
      "metadata": {
926
        "id": "44135b8d"
927
      },
928
      "source": [
929
        "### Agent type #3: React Docstore"
930
      ]
931
    },
932
    {
933
      "cell_type": "markdown",
934
      "id": "1e6d0d13",
935
      "metadata": {
936
        "id": "1e6d0d13"
937
      },
938
      "source": [
939
        "This type of agent is similar to the ones we have seen so far but it includes the interaction with a docstore. It will have two and only two tools at its disposal: 'Search' and 'Lookup'."
940
      ]
941
    },
942
    {
943
      "cell_type": "markdown",
944
      "id": "d569fd41",
945
      "metadata": {
946
        "id": "d569fd41"
947
      },
948
      "source": [
949
        "With 'Search' it will bring up a relevant article and with 'Lookup' the agent will find the right piece of information in the article. This is probably easiest to see in an example:"
950
      ]
951
    },
952
    {
953
      "cell_type": "code",
954
      "execution_count": 23,
955
      "id": "ecc452af",
956
      "metadata": {
957
        "id": "ecc452af"
958
      },
959
      "outputs": [],
960
      "source": [
961
        "from langchain import Wikipedia\n",
962
        "from langchain.agents.react.base import DocstoreExplorer, Tool\n",
963
        "\n",
964
        "docstore = DocstoreExplorer(Wikipedia())\n",
965
        "tools = [\n",
966
        "    Tool(\n",
967
        "        name=\"Search\",\n",
968
        "        func=docstore.search,\n",
969
        "        description='search wikipedia'\n",
970
        "    ),\n",
971
        "    Tool(\n",
972
        "        name=\"Lookup\",\n",
973
        "        func=docstore.lookup,\n",
974
        "        description='lookup a term in wikipedia'\n",
975
        "    )\n",
976
        "]"
977
      ]
978
    },
979
    {
980
      "cell_type": "code",
981
      "execution_count": 24,
982
      "id": "595938a1",
983
      "metadata": {
984
        "id": "595938a1"
985
      },
986
      "outputs": [],
987
      "source": [
988
        "docstore_agent = initialize_agent(\n",
989
        "    tools,\n",
990
        "    llm,\n",
991
        "    agent=\"react-docstore\",\n",
992
        "    verbose=True,\n",
993
        "    max_iterations=3\n",
994
        ")"
995
      ]
996
    },
997
    {
998
      "cell_type": "code",
999
      "execution_count": 25,
1000
      "id": "bba6b065",
1001
      "metadata": {
1002
        "colab": {
1003
          "base_uri": "https://localhost:8080/"
1004
        },
1005
        "id": "bba6b065",
1006
        "outputId": "52d4f8f6-b56b-4894-9f18-dc126be884b4"
1007
      },
1008
      "outputs": [
1009
        {
1010
          "output_type": "stream",
1011
          "name": "stdout",
1012
          "text": [
1013
            "\n",
1014
            "\n",
1015
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
1016
            "\u001b[32;1m\u001b[1;3mThought: I need to search Archimedes and find his last words.\n",
1017
            "Action: Search[Archimedes]\u001b[0m\n",
1018
            "Observation: \u001b[36;1m\u001b[1;3mArchimedes of Syracuse (, ARK-ihm-EE-deez; c. 287 – c. 212 BC) was an Ancient Greek mathematician, physicist, engineer, astronomer, and inventor from the ancient city of Syracuse in Sicily. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. Considered the greatest mathematician of ancient history, and one of the greatest of all time, Archimedes anticipated modern calculus and analysis by applying the concept of the infinitely small and the method of exhaustion to derive and rigorously prove a range of geometrical theorems. These include the area of a circle, the surface area and volume of a sphere, the area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.Archimedes' other mathematical achievements include deriving an approximation of pi, defining and investigating the Archimedean spiral, and devising a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, working on statics and hydrostatics. Archimedes' achievements in this area include a proof of the law of the lever, the widespread use of the concept of center of gravity, and the enunciation of the law of buoyancy or Archimedes' principle. He is also credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion.\n",
1019
            "Archimedes died during the siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting Archimedes' tomb, which was surmounted by a sphere and a cylinder that Archimedes requested be placed there to represent his mathematical discoveries.\n",
1020
            "Unlike his inventions, Archimedes' mathematical writings were little known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made until c. 530 AD by Isidore of Miletus in Byzantine Constantinople, while commentaries on the works of Archimedes by Eutocius in the 6th century opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance and again in the 17th century, while the discovery in 1906 of previously lost works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.\u001b[0m\n",
1021
            "Thought:\u001b[32;1m\u001b[1;3m The paragraph does not mention Archimedes' last words. I need to look up \"last words\".\n",
1022
            "Action: Lookup[last words]\u001b[0m\n",
1023
            "Observation: \u001b[33;1m\u001b[1;3m(Result 1/1) \n",
1024
            "== Biography ==\n",
1025
            "Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years before his death in 212 BC. In the Sand-Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he ever married or had children, or if he ever visited Alexandria, Egypt, during his youth. From his surviving written works, it is clear that he maintained collegiate relations with scholars based there, including his friend Conon of Samos and the head librarian Eratosthenes of Cyrene.The standard versions of Archimedes' life were written long after his death by Greek and Roman historians. The earliest reference to Archimedes occurs in The Histories by Polybius (c. 200–118 BC), written about 70 years after his death. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city from the Romans. Polybius remarks how, during the Second Punic War, Syracuse switched allegiances from Rome to Carthage, resulting in a military campaign under the command of Marcus Claudius Marcellus and Appius Claudius Pulcher, who besieged the city from 213 to 212 BC. He notes that the Romans underestimated Syracuse's defenses, and mentions several machines Archimedes designed, including improved catapults, crane-like machines that could be swung around in an arc, and other stone-throwers. Although the Romans ultimately captured the city, they suffered considerable losses due to Archimedes' inventiveness.Cicero (106–43 BC) mentions Archimedes in some of his works. While serving as a quaestor in Sicily, Cicero found what was presumed to be Archimedes' tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up and was able to see the carving and read some of the verses that had been added as an inscription. The tomb carried a sculpture illustrating Archimedes' favorite mathematical proof, that the volume and surface area of the sphere are two-thirds that of an enclosing cylinder including its bases. He also mentions that Marcellus brought to Rome two planetariums Archimedes built. The Roman historian Livy (59 BC–17 AD) retells Polybius' story of the capture of Syracuse and Archimedes' role in it.\n",
1026
            "Plutarch (45–119 AD) wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse. He also provides at least two accounts on how Archimedes died after the city was taken. According to the most popular account, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet Marcellus, but he declined, saying that he had to finish working on the problem. This enraged the soldier, who killed Archimedes with his sword. Another story has Archimedes carrying mathematical instruments before being killed because a soldier thought they were valuable items. Marcellus was reportedly angered by Archimedes' death, as he considered him a valuable scientific asset (he called Archimedes \"a geometrical Briareus\") and had ordered that he should not be harmed.The last words attributed to Archimedes are \"Do not disturb my circles\" (Latin, \"Noli turbare circulos meos\"; Katharevousa Greek, \"μὴ μου τοὺς κύκλους τάραττε\"), a reference to the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. There is no reliable evidence that Archimedes uttered these words and they do not appear in Plutarch's account. A similar quotation is found in the work of Valerius Maximus (fl. 30 AD), who wrote in Memorable Doings and Sayings, \"... sed protecto manibus puluere 'noli' inquit, 'obsecro, istum disturbare'\" (\"... but protecting the dust with his hands, said 'I beg of you, do not disturb this'\").\u001b[0m\n",
1027
            "Thought:\u001b[32;1m\u001b[1;3m The paragraph mentions that the last words attributed to Archimedes are \"Do not disturb my circles\". So the answer is \"Do not disturb my circles\".\n",
1028
            "Action: Finish[Do not disturb my circles]\u001b[0m\n",
1029
            "\n",
1030
            "\u001b[1m> Finished chain.\u001b[0m\n",
1031
            "Spent a total of 6692 tokens\n"
1032
          ]
1033
        },
1034
        {
1035
          "output_type": "execute_result",
1036
          "data": {
1037
            "text/plain": [
1038
              "{'input': \"What were Archimedes' last words?\",\n",
1039
              " 'output': 'Do not disturb my circles'}"
1040
            ]
1041
          },
1042
          "metadata": {},
1043
          "execution_count": 25
1044
        }
1045
      ],
1046
      "source": [
1047
        "count_tokens(docstore_agent, \"What were Archimedes' last words?\")"
1048
      ]
1049
    },
1050
    {
1051
      "cell_type": "markdown",
1052
      "id": "6a5f77ac",
1053
      "metadata": {
1054
        "id": "6a5f77ac"
1055
      },
1056
      "source": [
1057
        "We will not print the prompt here because it is too large, but you can see it yourself if you want to (we already know how).\n",
1058
        "\n",
1059
        "In short, it contains several examples of the `Question` > `Thought` > `Action` > `Observation` loop, that include the `Search` and `Lookup` tools.\n",
1060
        "\n",
1061
        "If you want to learn more about this approach, [this](https://arxiv.org/pdf/2210.03629.pdf) is the ReAct paper."
1062
      ]
1063
    },
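    {
      "cell_type": "markdown",
      "id": "added-docstore-prompt-md",
      "metadata": {},
      "source": [
        "If you do want to inspect it, the same attribute path we used for the previous agents should work here too (output omitted because of its length):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-docstore-prompt-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Inspect the few-shot ReAct prompt used by the docstore agent\n",
        "print(docstore_agent.agent.llm_chain.prompt.template)"
      ]
    },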
1064
    {
1065
      "cell_type": "markdown",
1066
      "id": "4aae3b5b",
1067
      "metadata": {
1068
        "id": "4aae3b5b"
1069
      },
1070
      "source": [
1071
        "### Agent type #4: Self Ask with Search"
1072
      ]
1073
    },
1074
    {
1075
      "cell_type": "markdown",
1076
      "id": "c07039e5",
1077
      "metadata": {
1078
        "id": "c07039e5"
1079
      },
1080
      "source": [
1081
        "This is the first-choice agent to use when using LLMs to extract information with a search engine. The agent will ask follow-up questions and use the search functionality to get intermediate answers that help it get to a final answer."
1082
      ]
1083
    },
1084
    {
1085
      "cell_type": "code",
1086
      "execution_count": 26,
1087
      "id": "903660b2",
1088
      "metadata": {
1089
        "id": "903660b2"
1090
      },
1091
      "outputs": [],
1092
      "source": [
1093
        "from langchain import OpenAI, SerpAPIWrapper\n",
1094
        "from langchain.agents import initialize_agent, Tool\n",
1095
        "\n",
1096
        "search = SerpAPIWrapper(serpapi_api_key='api_key')\n",
1097
        "tools = [\n",
1098
        "    Tool(\n",
1099
        "        name=\"Intermediate Answer\",\n",
1100
        "        func=search.run,\n",
1101
        "        description='google search'\n",
1102
        "    )\n",
1103
        "]\n",
1104
        "\n",
1105
        "self_ask_with_search = initialize_agent(tools, llm, agent=\"self-ask-with-search\", verbose=True)"
1106
      ]
1107
    },
1108
    {
1109
      "cell_type": "markdown",
1110
      "id": "0ec1c4c8",
1111
      "metadata": {
1112
        "id": "0ec1c4c8"
1113
      },
1114
      "source": [
1115
        "We will not interact with this agent because for that we would need a SerpAPI key. However, by checking out the prompt we can see a few examples of how it works:"
1116
      ]
1117
    },
1118
    {
1119
      "cell_type": "code",
1120
      "execution_count": 27,
1121
      "id": "e4134930",
1122
      "metadata": {
1123
        "id": "e4134930",
1124
        "outputId": "2f58884d-0c4e-4b40-de45-6928e0c60ef4",
1125
        "colab": {
1126
          "base_uri": "https://localhost:8080/"
1127
        }
1128
      },
1129
      "outputs": [
1130
        {
1131
          "output_type": "stream",
1132
          "name": "stdout",
1133
          "text": [
1134
            "Question: Who lived longer, Muhammad Ali or Alan Turing?\n",
1135
            "Are follow up questions needed here: Yes.\n",
1136
            "Follow up: How old was Muhammad Ali when he died?\n",
1137
            "Intermediate answer: Muhammad Ali was 74 years old when he died.\n",
1138
            "Follow up: How old was Alan Turing when he died?\n",
1139
            "Intermediate answer: Alan Turing was 41 years old when he died.\n",
1140
            "So the final answer is: Muhammad Ali\n",
1141
            "\n",
1142
            "Question: When was the founder of craigslist born?\n",
1143
            "Are follow up questions needed here: Yes.\n",
1144
            "Follow up: Who was the founder of craigslist?\n",
1145
            "Intermediate answer: Craigslist was founded by Craig Newmark.\n",
1146
            "Follow up: When was Craig Newmark born?\n",
1147
            "Intermediate answer: Craig Newmark was born on December 6, 1952.\n",
1148
            "So the final answer is: December 6, 1952\n",
1149
            "\n",
1150
            "Question: Who was the maternal grandfather of George Washington?\n",
1151
            "Are follow up questions needed here: Yes.\n",
1152
            "Follow up: Who was the mother of George Washington?\n",
1153
            "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n",
1154
            "Follow up: Who was the father of Mary Ball Washington?\n",
1155
            "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n",
1156
            "So the final answer is: Joseph Ball\n",
1157
            "\n",
1158
            "Question: Are both the directors of Jaws and Casino Royale from the same country?\n",
1159
            "Are follow up questions needed here: Yes.\n",
1160
            "Follow up: Who is the director of Jaws?\n",
1161
            "Intermediate answer: The director of Jaws is Steven Spielberg.\n",
1162
            "Follow up: Where is Steven Spielberg from?\n",
1163
            "Intermediate answer: The United States.\n",
1164
            "Follow up: Who is the director of Casino Royale?\n",
1165
            "Intermediate answer: The director of Casino Royale is Martin Campbell.\n",
1166
            "Follow up: Where is Martin Campbell from?\n",
1167
            "Intermediate answer: New Zealand.\n",
1168
            "So the final answer is: No\n",
1169
            "\n",
1170
            "Question: {input}\n",
1171
            "Are followup questions needed here:{agent_scratchpad}\n"
1172
          ]
1173
        }
1174
      ],
1175
      "source": [
1176
        "print(self_ask_with_search.agent.llm_chain.prompt.template)"
1177
      ]
1178
    },
1179
    {
1180
      "cell_type": "markdown",
1181
      "id": "3e7790ae",
1182
      "metadata": {
1183
        "id": "3e7790ae"
1184
      },
1185
      "source": [
1186
        "As we can see, the prompt is basically a series of examples that show the LLM how to ask follow-up questions to a search tool until it can get to the final answer."
1187
      ]
1188
    },
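    {
      "cell_type": "markdown",
      "id": "added-self-ask-run-md",
      "metadata": {},
      "source": [
        "If you do have a SerpAPI key, running the agent looks just like the previous examples. Something along these lines should work (we have not executed it here, and the question is borrowed from the prompt's own examples):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-self-ask-run-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Requires a valid key in the SerpAPIWrapper defined above\n",
        "result = count_tokens(\n",
        "    self_ask_with_search,\n",
        "    \"Who lived longer, Muhammad Ali or Alan Turing?\"\n",
        ")"
      ]
    },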
1189
    {
1190
      "cell_type": "markdown",
1191
      "id": "5df324ba",
1192
      "metadata": {
1193
        "id": "5df324ba"
1194
      },
1195
      "source": [
1196
        "And again, [here](https://arxiv.org/pdf/2210.03350.pdf) is the paper if you want to dive deeper!"
1197
      ]
1198
    },
1199
    {
1200
      "cell_type": "markdown",
1201
      "id": "4cd80a08",
1202
      "metadata": {
1203
        "id": "4cd80a08"
1204
      },
1205
      "source": [
1206
        "### Wrapping up"
1207
      ]
1208
    },
1209
    {
1210
      "cell_type": "markdown",
1211
      "id": "97d108e2",
1212
      "metadata": {
1213
        "id": "97d108e2"
1214
      },
1215
      "source": [
1216
        "And that's all for agents! There are many other things you can do with agents; just to name a few:\n",
1217
        "* Create your own custom agent\n",
1218
        "* Use them with many other tools, even custom ones (see the sketch below)\n",
1219
        "* Trace every call an agent makes through a convenient UI"
1220
      ]
1221
    },
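    {
      "cell_type": "markdown",
      "id": "added-custom-tool-md",
      "metadata": {},
      "source": [
        "For a taste of custom tools, a minimal sketch reusing the same `Tool` and `initialize_agent` interfaces from above might look like this. The word-counting tool is just a toy example, not something LangChain ships:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "added-custom-tool-code",
      "metadata": {},
      "outputs": [],
      "source": [
        "from langchain.agents import initialize_agent, Tool\n",
        "\n",
        "def count_words(text: str) -> str:\n",
        "    \"\"\"Toy custom tool: count the words in the input string.\"\"\"\n",
        "    return str(len(text.split()))\n",
        "\n",
        "custom_tools = [\n",
        "    Tool(\n",
        "        name=\"Word Counter\",\n",
        "        func=count_words,\n",
        "        description=\"useful for counting the number of words in a text\"\n",
        "    )\n",
        "]\n",
        "\n",
        "custom_agent = initialize_agent(\n",
        "    custom_tools,\n",
        "    llm,\n",
        "    agent=\"zero-shot-react-description\",\n",
        "    verbose=True,\n",
        "    max_iterations=3\n",
        ")"
      ]
    },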
1222
    {
1223
      "cell_type": "markdown",
1224
      "id": "e33dd90a",
1225
      "metadata": {
1226
        "id": "e33dd90a"
1227
      },
1228
      "source": [
1229
        "Check the how-to [guides](https://langchain.readthedocs.io/en/latest/modules/agents/how_to_guides.html) for more!"
1230
      ]
1231
    }
1232
  ],
1233
  "metadata": {
1234
    "colab": {
1235
      "provenance": []
1236
    },
1237
    "kernelspec": {
1238
      "display_name": "Python 3",
1239
      "name": "python3"
1240
    },
1241
    "language_info": {
1242
      "name": "python"
1243
    },
1244
    "vscode": {
1245
      "interpreter": {
1246
        "hash": "578e1e8dce4dc6c542f1ea2d66a2d9db6ef592936dcc314004bdae386f827d38"
1247
      }
1248
    }
1249
  },
1250
  "nbformat": 4,
1251
  "nbformat_minor": 5
1252
}
