{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/09-langchain-streaming/09-langchain-streaming.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/09-langchain-streaming/09-langchain-streaming.ipynb)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### [LangChain Handbook](https://pinecone.io/learn/langchain)\n",
    "\n",
    "# Streaming\n",
    "\n",
    "For LLMs, streaming has become an increasingly popular feature. The idea is to rapidly return tokens as an LLM is generating them, rather than waiting for a full response to be created before returning anything.\n",
    "\n",
    "Streaming is very easy to implement for simple use cases, but it gets more complicated once we start including things like Agents, which run their own internal logic that can block our attempts at streaming. Fortunately, we can make it work; it just requires a little extra effort.\n",
    "\n",
    "We'll start easy by implementing streaming to the terminal for LLMs, but by the end of the notebook we'll be handling the more complex task of streaming via FastAPI for Agents.\n",
    "\n",
    "First, let's install all of the libraries we'll be using."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -qU \\\n",
    "    openai==0.28.0 \\\n",
    "    langchain==0.0.301 \\\n",
    "    fastapi==0.103.1 \\\n",
    "    \"uvicorn[standard]\"==0.23.2"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## LLM Streaming to Stdout"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The simplest form of streaming is to simply \"print\" the tokens as they're generated. To set this up we need to initialize an LLM (one that supports streaming, not all do) with two specific parameters:\n",
    "\n",
    "* `streaming=True`, to enable streaming\n",
    "* `callbacks=[SomeCallBackHere()]`, where we pass a LangChain callback class (or a list containing multiple callback classes).\n",
    "\n",
    "The `streaming` parameter is self-explanatory. The `callbacks` parameter and the callback classes are less so: essentially, they act as little bits of code that do something as each token from our LLM is generated. As mentioned, the simplest form of streaming is to print the tokens as they're being generated, which is what the `StreamingStdOutCallbackHandler` does."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\") or \"YOUR_API_KEY\"\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    openai_api_key=os.getenv(\"OPENAI_API_KEY\"),\n",
    "    temperature=0.0,\n",
    "    model_name=\"gpt-3.5-turbo\",\n",
    "    streaming=True,  # ! important\n",
    "    callbacks=[StreamingStdOutCallbackHandler()]  # ! important\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now if we run the LLM we'll see the response being _streamed_."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema import HumanMessage\n",
    "\n",
    "# create messages to be passed to chat LLM\n",
    "messages = [HumanMessage(content=\"tell me a long story\")]\n",
    "\n",
    "llm(messages)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That was surprisingly easy, but things begin to get much more complicated as soon as we begin using agents. Let's first initialize an agent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "from langchain.agents import load_tools, AgentType, initialize_agent\n",
    "\n",
    "# initialize conversational memory\n",
    "memory = ConversationBufferWindowMemory(\n",
    "    memory_key=\"chat_history\",\n",
    "    k=5,\n",
    "    return_messages=True,\n",
    "    output_key=\"output\"\n",
    ")\n",
    "\n",
    "# create a single tool to see how it impacts streaming\n",
    "tools = load_tools([\"llm-math\"], llm=llm)\n",
    "\n",
    "# initialize the agent\n",
    "agent = initialize_agent(\n",
    "    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n",
    "    tools=tools,\n",
    "    llm=llm,\n",
    "    memory=memory,\n",
    "    verbose=True,\n",
    "    max_iterations=3,\n",
    "    early_stopping_method=\"generate\",\n",
    "    return_intermediate_steps=False\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our `StreamingStdOutCallbackHandler` is already attached to the agent, because we initialized the agent with the same `llm` object that we created with that callback. So let's see what we get when running the agent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = \"Hello, how are you?\"\n",
    "\n",
    "agent(prompt)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not bad, but we do now have the issue of streaming the _entire_ output from the LLM. Because we're using an agent, the LLM is instructed to output the JSON format we can see here so that the agent logic can handle tool usage, multiple \"thinking\" steps, and so on. For example, if we ask a math question we'll see this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent(\"what is the square root of 71?\")"
   ]
  },
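  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For context on what the callback handlers below will be looking for: the conversational agent instructs the LLM to wrap every response in a JSON blob with an `\"action\"` key and an `\"action_input\"` key, roughly `{\"action\": \"Calculator\", \"action_input\": \"sqrt(71)\"}` for a tool call and `{\"action\": \"Final Answer\", \"action_input\": \"...\"}` for the final response (illustrative shapes only; the exact text varies between runs). All of that scaffolding is included in what gets streamed above."
   ]
  },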
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's interesting to see during development, but we'll want to clean this streaming up a little in any actual use case. For that we can take one of two approaches: either we build a custom callback handler, or we use a purpose-built callback handler from LangChain (as usual, LangChain has something for everything). Let's first try LangChain's purpose-built `FinalStreamingStdOutCallbackHandler`.\n",
    "\n",
    "We will overwrite the existing `callbacks` attribute found here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.agent.llm_chain.llm"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the new callback handler:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.callbacks.streaming_stdout_final_only import (\n",
    "    FinalStreamingStdOutCallbackHandler,\n",
    ")\n",
    "\n",
    "agent.agent.llm_chain.llm.callbacks = [\n",
    "    FinalStreamingStdOutCallbackHandler(\n",
    "        answer_prefix_tokens=[\"Final\", \"Answer\"]\n",
    "    )\n",
    "]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent(\"what is the square root of 71?\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not quite there. We would need to tune the `answer_prefix_tokens` argument, and it is hard to get right. It's generally easier to use a custom callback handler, like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "from typing import Any\n",
    "\n",
    "class CallbackHandler(StreamingStdOutCallbackHandler):\n",
    "    def __init__(self):\n",
    "        self.content: str = \"\"\n",
    "        self.final_answer: bool = False\n",
    "\n",
    "    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:\n",
    "        self.content += token\n",
    "        if \"Final Answer\" in self.content:\n",
    "            # now we're in the final answer section, but don't print yet\n",
    "            self.final_answer = True\n",
    "            self.content = \"\"\n",
    "        if self.final_answer:\n",
    "            # only start printing once the answer text begins, and skip the closing brace\n",
    "            if '\"action_input\": \"' in self.content:\n",
    "                if token not in [\"}\"]:\n",
    "                    sys.stdout.write(token)  # equivalent to `print(token, end=\"\")`\n",
    "                    sys.stdout.flush()\n",
    "\n",
    "agent.agent.llm_chain.llm.callbacks = [CallbackHandler()]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try again:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent(\"what is the square root of 71?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.agent.llm_chain.llm"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It isn't perfect, but this is getting better. Now, in most scenarios we're unlikely to simply be printing output to a terminal or notebook. When we want to do something more complex like stream this data through another API, we need to do things differently."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using FastAPI with Agents"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In most cases we'll be placing our LLMs, Agents, etc. behind something like an API. Let's add that into the mix and see how we can implement streaming for agents with FastAPI.\n",
    "\n",
    "First, we'll create a simple `main.py` script to contain our FastAPI logic. You can find it in the same GitHub repo location as this notebook ([here's a link](https://github.com/pinecone-io/examples/blob/langchain-streaming/learn/generation/langchain/handbook/09-langchain-streaming/main.py)).\n",
    "\n",
    "To run the API, navigate to the directory and run `uvicorn main:app --reload`. Once it's running, you can confirm that by hitting the `/health` endpoint below and looking for the 🤙 status."
   ]
  },
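  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, here is a minimal sketch of what such a `main.py` might look like. It is not the exact file linked above, and names like `Query`, `QueueCallback`, and `token_generator` are purely illustrative. The core idea is the one described a couple of cells below: a callback pushes tokens onto an `asyncio.Queue`, and an async generator drains that queue into a `StreamingResponse`.\n",
    "\n",
    "```python\n",
    "# main.py (sketch): streams agent tokens over HTTP via a queue\n",
    "import asyncio\n",
    "\n",
    "from fastapi import FastAPI\n",
    "from fastapi.responses import StreamingResponse\n",
    "from pydantic import BaseModel\n",
    "\n",
    "from langchain.agents import AgentType, initialize_agent, load_tools\n",
    "from langchain.callbacks.base import AsyncCallbackHandler\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "\n",
    "app = FastAPI()\n",
    "\n",
    "\n",
    "class Query(BaseModel):\n",
    "    text: str  # matches the {\"text\": ...} payload the notebook sends\n",
    "\n",
    "\n",
    "class QueueCallback(AsyncCallbackHandler):\n",
    "    \"\"\"Pushes every generated token onto an asyncio.Queue.\"\"\"\n",
    "\n",
    "    def __init__(self, queue: asyncio.Queue):\n",
    "        self.queue = queue\n",
    "\n",
    "    async def on_llm_new_token(self, token: str, **kwargs) -> None:\n",
    "        await self.queue.put(token)\n",
    "\n",
    "\n",
    "# expects OPENAI_API_KEY to be set in the environment\n",
    "llm = ChatOpenAI(temperature=0.0, model_name=\"gpt-3.5-turbo\", streaming=True)\n",
    "memory = ConversationBufferWindowMemory(\n",
    "    memory_key=\"chat_history\", k=5, return_messages=True, output_key=\"output\"\n",
    ")\n",
    "agent = initialize_agent(\n",
    "    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n",
    "    tools=load_tools([\"llm-math\"], llm=llm),\n",
    "    llm=llm,\n",
    "    memory=memory,\n",
    ")\n",
    "\n",
    "\n",
    "async def token_generator(query: str):\n",
    "    # one queue per request; attach the callback the same way we did above\n",
    "    queue: asyncio.Queue = asyncio.Queue()\n",
    "    agent.agent.llm_chain.llm.callbacks = [QueueCallback(queue)]\n",
    "    # run the agent in the background and drain the queue as tokens arrive\n",
    "    task = asyncio.create_task(agent.acall(inputs={\"input\": query}))\n",
    "    while not task.done() or not queue.empty():\n",
    "        try:\n",
    "            yield await asyncio.wait_for(queue.get(), timeout=0.2)\n",
    "        except asyncio.TimeoutError:\n",
    "            continue\n",
    "    await task\n",
    "    # note: this streams *all* tokens, including the agent's JSON scaffolding;\n",
    "    # filtering down to the final answer would follow the custom-handler idea above\n",
    "\n",
    "\n",
    "@app.get(\"/health\")\n",
    "async def health():\n",
    "    return {\"status\": \"🤙\"}\n",
    "\n",
    "\n",
    "# GET with a JSON body mirrors how the notebook calls the endpoint\n",
    "@app.get(\"/chat\")\n",
    "async def chat(query: Query):\n",
    "    return StreamingResponse(token_generator(query.text), media_type=\"text/plain\")\n",
    "```\n",
    "\n",
    "The polling loop with a short timeout is just one simple way to end the stream once the agent finishes; the actual `main.py` may use a different mechanism (such as LangChain's `AsyncIteratorCallbackHandler`), so treat this purely as a reference for the pattern described below."
   ]
  },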
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'status': '🤙'}"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "res = requests.get(\"http://localhost:8000/health\")\n",
    "res.json()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<Response [200]>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "res = requests.get(\"http://localhost:8000/chat\",\n",
    "    json={\"text\": \"hello there!\"}\n",
    ")\n",
    "res"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'input': 'hello there!',\n",
       " 'chat_history': [],\n",
       " 'output': 'Hello! How can I assist you today?'}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "res.json()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unlike with our StdOut streaming, we now need to send our tokens to a generator function that feeds those tokens to FastAPI via a `StreamingResponse` object. To handle this we need to use async code, otherwise our generator will not begin emitting anything until _after_ generation is already complete.\n",
    "\n",
    "The `Queue` is accessed by our callback handler: as each token is generated, the handler puts the token into the queue. Our generator function asynchronously checks for new tokens being added to the queue. As soon as the generator sees a token has been added, it gets the token and yields it to our `StreamingResponse`.\n",
    "\n",
    "To see it in action, we'll define a streaming request function called `get_stream`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_stream(query: str):\n",
    "    s = requests.Session()\n",
    "    with s.get(\n",
    "        \"http://localhost:8000/chat\",\n",
    "        stream=True,\n",
    "        json={\"text\": query}\n",
    "    ) as r:\n",
    "        for line in r.iter_content():\n",
    "            print(line.decode(\"utf-8\"), end=\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " \"Hello! How can I assist you today?\"\n"
     ]
    }
   ],
   "source": [
    "get_stream(\"hi there!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ml",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}