{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "# Auto Generated Agent Chat: Collaborative Task Solving with Coding and Planning Agent\n",
    "\n",
    "AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
    "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
    "\n",
    "In this notebook, we demonstrate how to use multiple agents to work together and accomplish a task that requires finding information from the web and coding. `AssistantAgent` is an LLM-based agent that can write and debug Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent that serves as a proxy for a user to execute the code written by `AssistantAgent`. We further create a planning agent for the assistant agent to consult. The planning agent is a variation of the LLM-based `AssistantAgent` with a different system message.\n",
    "\n",
    "## Requirements\n",
    "\n",
    "AutoGen requires `Python>=3.8`. To run this notebook example, please install `pyautogen` and `docker`:\n",
    "```bash\n",
    "pip install pyautogen docker\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-02-13T23:40:52.317406Z",
     "iopub.status.busy": "2023-02-13T23:40:52.316561Z",
     "iopub.status.idle": "2023-02-13T23:40:52.321193Z",
     "shell.execute_reply": "2023-02-13T23:40:52.320628Z"
    }
   },
   "outputs": [],
   "source": [
    "# %pip install \"pyautogen>=0.2.3\" docker"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set your API Endpoint\n",
    "\n",
    "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a JSON file. It first looks for an environment variable with the specified name, whose value must be a valid JSON string. If that variable is not found, it looks for a JSON file with the same name. It then filters the configs by `filter_dict`.\n",
    "\n",
    "It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import autogen\n",
    "\n",
    "config_list = autogen.config_list_from_json(\n",
    "    \"OAI_CONFIG_LIST\",\n",
    "    filter_dict={\n",
    "        \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
    "    },\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The config list looks like the following:\n",
    "```python\n",
    "config_list = [\n",
    "    {\n",
    "        'model': 'gpt-4',\n",
    "        'api_key': '<your OpenAI API key here>',\n",
    "    },  # OpenAI API endpoint for gpt-4\n",
    "    {\n",
    "        'model': 'gpt-4',\n",
    "        'api_key': '<your Azure OpenAI API key here>',\n",
    "        'base_url': '<your Azure OpenAI API base here>',\n",
    "        'api_type': 'azure',\n",
    "        'api_version': '2024-02-15-preview',\n",
    "    },  # Azure OpenAI API endpoint for gpt-4\n",
    "    {\n",
    "        'model': 'gpt-4-32k',\n",
    "        'api_key': '<your Azure OpenAI API key here>',\n",
    "        'base_url': '<your Azure OpenAI API base here>',\n",
    "        'api_type': 'azure',\n",
    "        'api_version': '2024-02-15-preview',\n",
    "    },  # Azure OpenAI API endpoint for gpt-4-32k\n",
    "]\n",
    "```\n",
    "\n",
    "You can set the value of `config_list` in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n",
    "\n",
    "## Construct Agents\n",
    "\n",
    "We construct the planning agent named \"planner\" and a user proxy agent for the planner named \"planner_user\". We specify `human_input_mode` as \"NEVER\" in the user proxy agent, so it never asks for human feedback. We define the `ask_planner` function to send a message to the planner and return the suggestion from the planner."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "planner = autogen.AssistantAgent(\n",
    "    name=\"planner\",\n",
    "    llm_config={\"config_list\": config_list},\n",
    "    # the default system message of the AssistantAgent is overwritten here\n",
    "    system_message=\"You are a helpful AI assistant. You suggest coding and reasoning steps for another AI assistant to accomplish a task. Do not suggest concrete code. For any action beyond writing code or reasoning, convert it to a step that can be implemented by writing code. For example, browsing the web can be implemented by writing code that reads and prints the content of a web page. Finally, inspect the execution result. If the plan is not good, suggest a better plan. If the execution is wrong, analyze the error and suggest a fix.\",\n",
    ")\n",
    "planner_user = autogen.UserProxyAgent(\n",
    "    name=\"planner_user\",\n",
    "    max_consecutive_auto_reply=0,  # terminate without auto-reply\n",
    "    human_input_mode=\"NEVER\",\n",
    "    code_execution_config={\n",
    "        \"use_docker\": False\n",
    "    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
    ")\n",
    "\n",
    "\n",
    "def ask_planner(message):\n",
    "    planner_user.initiate_chat(planner, message=message)\n",
    "    # return the last message received from the planner\n",
    "    return planner_user.last_message()[\"content\"]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We construct the assistant agent and the user proxy agent. We specify `human_input_mode` as \"TERMINATE\" in the user proxy agent, which asks for human feedback when it receives a \"TERMINATE\" signal from the assistant agent. We set `functions` in the `AssistantAgent`'s `llm_config` and `function_map` in the `UserProxyAgent` to expose the `ask_planner` function defined above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# create an AssistantAgent instance named \"assistant\"\n",
    "assistant = autogen.AssistantAgent(\n",
    "    name=\"assistant\",\n",
    "    llm_config={\n",
    "        \"temperature\": 0,\n",
    "        \"timeout\": 600,\n",
    "        \"cache_seed\": 42,\n",
    "        \"config_list\": config_list,\n",
    "        \"functions\": [\n",
    "            {\n",
    "                \"name\": \"ask_planner\",\n",
    "                \"description\": \"ask planner to: 1. get a plan for finishing a task, 2. verify the execution result of the plan and potentially suggest a new plan.\",\n",
    "                \"parameters\": {\n",
    "                    \"type\": \"object\",\n",
    "                    \"properties\": {\n",
    "                        \"message\": {\n",
    "                            \"type\": \"string\",\n",
    "                            \"description\": \"question to ask planner. Make sure the question includes enough context, such as the code and the execution result. The planner does not know the conversation between you and the user unless you share the conversation with the planner.\",\n",
    "                        },\n",
    "                    },\n",
    "                    \"required\": [\"message\"],\n",
    "                },\n",
    "            },\n",
    "        ],\n",
    "    },\n",
    ")\n",
    "\n",
    "# create a UserProxyAgent instance named \"user_proxy\"\n",
    "user_proxy = autogen.UserProxyAgent(\n",
    "    name=\"user_proxy\",\n",
    "    human_input_mode=\"TERMINATE\",\n",
    "    max_consecutive_auto_reply=10,\n",
    "    # is_termination_msg=lambda x: \"content\" in x and x[\"content\"] is not None and x[\"content\"].rstrip().endswith(\"TERMINATE\"),\n",
    "    code_execution_config={\n",
    "        \"work_dir\": \"planning\",\n",
    "        \"use_docker\": False,\n",
    "    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
    "    function_map={\"ask_planner\": ask_planner},\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Perform a task\n",
    "\n",
    "We invoke the `initiate_chat()` method of the user proxy agent to start the conversation. When you run the cell below, you will be prompted to provide feedback after the assistant agent sends a \"TERMINATE\" signal at the end of its message. If you don't provide any feedback (by pressing Enter directly), the conversation will finish. Before the \"TERMINATE\" signal, the user proxy agent will try to execute the code suggested by the assistant agent on behalf of the user."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "Suggest a fix to an open good first issue of flaml\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "To suggest a fix to an open good first issue of FLAML, we first need to fetch the list of open issues labeled as \"good first issue\" from the FLAML GitHub repository. We can do this using the GitHub API. Here is a Python script that fetches and prints the list of open issues labeled as \"good first issue\" from the FLAML repository.\n",
      "\n",
      "```python\n",
      "# filename: fetch_issues.py\n",
      "\n",
      "import requests\n",
      "import json\n",
      "\n",
      "def fetch_issues():\n",
      "    url = \"https://api.github.com/repos/microsoft/FLAML/issues\"\n",
      "    params = {\n",
      "        \"state\": \"open\",\n",
      "        \"labels\": \"good first issue\"\n",
      "    }\n",
      "    response = requests.get(url, params=params)\n",
      "    issues = json.loads(response.text)\n",
      "    for issue in issues:\n",
      "        print(f\"Issue #{issue['number']}: {issue['title']}\")\n",
      "\n",
      "if __name__ == \"__main__\":\n",
      "    fetch_issues()\n",
      "```\n",
      "\n",
      "Please run this script to fetch the list of open issues. After that, I can help you analyze one of the issues and suggest a potential fix.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
      "\u001b[31m\n",
      ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "execute_code was called without specifying a value for use_docker. Since the python docker package is not available, code will be run natively. Note: this fallback behavior is subject to change\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "exitcode: 0 (execution succeeded)\n",
      "Code output: \n",
      "Issue #1228: include that `retrain_full = True` does not include the user provided validation data in the docs.\n",
      "Issue #1120: use_label_encoder warning with xgboost\n",
      "Issue #981: Running flaml[tune] using \"-O\" flag for python interpreter (optimization - disables assertions) crashes\n",
      "Issue #903: Conditional parameter flow2 crash\n",
      "Issue #884: indentation space\n",
      "Issue #882: Check if openml version is required\n",
      "Issue #834: Adjust the indent\n",
      "Issue #821: pip install flaml FAIL\n",
      "Issue #807: Isolate the ensemble part and expose it to users\n",
      "Issue #805: how to pass categorical features names or indices to learner\n",
      "Issue #785: Flaml/LightGBM - Shouldn't I found better/faster or equal results from FLAML than direct LightGBM?\n",
      "Issue #764: Add an announcement of the discord channel\n",
      "Issue #748: Documentation about small budget\n",
      "Issue #737: Make zero-shot automl more discoverable\n",
      "Issue #509: New HCrystalBall release\n",
      "Issue #429: samples about conversion to ONNX\n",
      "Issue #413: support anomaly detection\n",
      "Issue #304: CatBoost Fails with Keyword 'groups'\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "\u001b[32m***** Suggested function Call: ask_planner *****\u001b[0m\n",
      "Arguments: \n",
      "{\n",
      "  \"message\": \"Here are the open issues labeled as 'good first issue' in the FLAML repository. Please suggest a plan to fix one of these issues. \\n\\n1. Issue #1228: include that `retrain_full = True` does not include the user provided validation data in the docs.\\n2. Issue #1120: use_label_encoder warning with xgboost\\n3. Issue #981: Running flaml[tune] using \\\"-O\\\" flag for python interpreter (optimization - disables assertions) crashes\\n4. Issue #903: Conditional parameter flow2 crash\\n5. Issue #884: indentation space\\n6. Issue #882: Check if openml version is required\\n7. Issue #834: Adjust the indent\\n8. Issue #821: pip install flaml FAIL\\n9. Issue #807: Isolate the ensemble part and expose it to users\\n10. Issue #805: how to pass categorical features names or indices to learner\\n11. Issue #785: Flaml/LightGBM - Shouldn't I found better/faster or equal results from FLAML than direct LightGBM?\\n12. Issue #764: Add an announcement of the discord channel\\n13. Issue #748: Documentation about small budget\\n14. Issue #737: Make zero-shot automl more discoverable\\n15. Issue #509: New HCrystalBall release\\n16. Issue #429: samples about conversion to ONNX\\n17. Issue #413: support anomaly detection\\n18. Issue #304: CatBoost Fails with Keyword 'groups'\"\n",
      "}\n",
      "\u001b[32m************************************************\u001b[0m\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
      "\u001b[35m\n",
      ">>>>>>>> EXECUTING FUNCTION ask_planner...\u001b[0m\n",
      "\u001b[33mplanner_user\u001b[0m (to planner):\n",
      "\n",
      "Here are the open issues labeled as 'good first issue' in the FLAML repository. Please suggest a plan to fix one of these issues. \n",
      "\n",
      "1. Issue #1228: include that `retrain_full = True` does not include the user provided validation data in the docs.\n",
      "2. Issue #1120: use_label_encoder warning with xgboost\n",
      "3. Issue #981: Running flaml[tune] using \"-O\" flag for python interpreter (optimization - disables assertions) crashes\n",
      "4. Issue #903: Conditional parameter flow2 crash\n",
      "5. Issue #884: indentation space\n",
      "6. Issue #882: Check if openml version is required\n",
      "7. Issue #834: Adjust the indent\n",
      "8. Issue #821: pip install flaml FAIL\n",
      "9. Issue #807: Isolate the ensemble part and expose it to users\n",
      "10. Issue #805: how to pass categorical features names or indices to learner\n",
      "11. Issue #785: Flaml/LightGBM - Shouldn't I found better/faster or equal results from FLAML than direct LightGBM?\n",
      "12. Issue #764: Add an announcement of the discord channel\n",
      "13. Issue #748: Documentation about small budget\n",
      "14. Issue #737: Make zero-shot automl more discoverable\n",
      "15. Issue #509: New HCrystalBall release\n",
      "16. Issue #429: samples about conversion to ONNX\n",
      "17. Issue #413: support anomaly detection\n",
      "18. Issue #304: CatBoost Fails with Keyword 'groups'\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33mplanner\u001b[0m (to planner_user):\n",
      "\n",
      "Here are the steps to fix the first issue (Issue #1228: include that `retrain_full = True` does not include the user provided validation data in the docs):\n",
      "\n",
      "1. **Identify where the issue is**: Understand the context of `retrain_full = True` within FLAML. Figure out how it works - whether it really ignores the user-provided validation data or not.\n",
      "\n",
      "2. **Update the documentation**: Based on your understanding, update the corresponding part of the documentation to include a note or clarification about this fact. You can use Markdown language to do the documentation. The note can be written in a clear and concise way.\n",
      "\n",
      "3. **Commit and Push Code**: After making the necessary changes, commit and push the changes to your repository. Make sure to include a detailed commit message to make it clear what changes were made.\n",
      "\n",
      "4. **Submit a Pull Request (PR)**: Then submit a pull request to the FLAML repository. In the message of the PR, be sure to reference the issue number (i.e. #1228), to automatically link it.\n",
      "\n",
      "5. **Wait for Review**: Maintainers will then review your code. They may suggest changes or improvements, and once they're satisfied, they'll merge your changes into the main repository.\n",
      "\n",
      "6. **Inspect the Results**: After the pull request is merged, verify that the changes have been successfully incorporated and the documentation correctly reflects the behavior of `retrain_full = True`.\n",
      "\n",
      "Please note, this plan does not involve writing code in any programming language. Instead, it focuses on understanding the library, editing Markdown files, and using Git and GitHub functionalities appropriately. \n",
      "\n",
      "Should any of your actions result in an error, this could be due to multiple reasons such as misinterpretation of the behaviour of `retrain_full = True`, errors in the markdown syntax, among others. You will need to debug the error based on the specific error message and review your changes. After making corrections, you should commit and push your changes and verify that the error has been fixed.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "\u001b[32m***** Response from calling function \"ask_planner\" *****\u001b[0m\n",
      "Here are the steps to fix the first issue (Issue #1228: include that `retrain_full = True` does not include the user provided validation data in the docs):\n",
      "\n",
      "1. **Identify where the issue is**: Understand the context of `retrain_full = True` within FLAML. Figure out how it works - whether it really ignores the user-provided validation data or not.\n",
      "\n",
      "2. **Update the documentation**: Based on your understanding, update the corresponding part of the documentation to include a note or clarification about this fact. You can use Markdown language to do the documentation. The note can be written in a clear and concise way.\n",
      "\n",
      "3. **Commit and Push Code**: After making the necessary changes, commit and push the changes to your repository. Make sure to include a detailed commit message to make it clear what changes were made.\n",
      "\n",
      "4. **Submit a Pull Request (PR)**: Then submit a pull request to the FLAML repository. In the message of the PR, be sure to reference the issue number (i.e. #1228), to automatically link it.\n",
      "\n",
      "5. **Wait for Review**: Maintainers will then review your code. They may suggest changes or improvements, and once they're satisfied, they'll merge your changes into the main repository.\n",
      "\n",
      "6. **Inspect the Results**: After the pull request is merged, verify that the changes have been successfully incorporated and the documentation correctly reflects the behavior of `retrain_full = True`.\n",
      "\n",
      "Please note, this plan does not involve writing code in any programming language. Instead, it focuses on understanding the library, editing Markdown files, and using Git and GitHub functionalities appropriately. \n",
      "\n",
      "Should any of your actions result in an error, this could be due to multiple reasons such as misinterpretation of the behaviour of `retrain_full = True`, errors in the markdown syntax, among others. You will need to debug the error based on the specific error message and review your changes. After making corrections, you should commit and push your changes and verify that the error has been fixed.\n",
      "\u001b[32m********************************************************\u001b[0m\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "To fix the issue #1228, you need to follow these steps:\n",
      "\n",
      "1. **Identify where the issue is**: Understand the context of `retrain_full = True` within FLAML. Figure out how it works - whether it really ignores the user-provided validation data or not.\n",
      "\n",
      "2. **Update the documentation**: Based on your understanding, update the corresponding part of the documentation to include a note or clarification about this fact. You can use Markdown language to do the documentation. The note can be written in a clear and concise way.\n",
      "\n",
      "3. **Commit and Push Code**: After making the necessary changes, commit and push the changes to your repository. Make sure to include a detailed commit message to make it clear what changes were made.\n",
      "\n",
      "4. **Submit a Pull Request (PR)**: Then submit a pull request to the FLAML repository. In the message of the PR, be sure to reference the issue number (i.e. #1228), to automatically link it.\n",
      "\n",
      "5. **Wait for Review**: Maintainers will then review your code. They may suggest changes or improvements, and once they're satisfied, they'll merge your changes into the main repository.\n",
      "\n",
      "6. **Inspect the Results**: After the pull request is merged, verify that the changes have been successfully incorporated and the documentation correctly reflects the behavior of `retrain_full = True`.\n",
      "\n",
      "Please note, this plan does not involve writing code in any programming language. Instead, it focuses on understanding the library, editing Markdown files, and using Git and GitHub functionalities appropriately. \n",
      "\n",
      "Should any of your actions result in an error, this could be due to multiple reasons such as misinterpretation of the behaviour of `retrain_full = True`, errors in the markdown syntax, among others. You will need to debug the error based on the specific error message and review your changes. After making corrections, you should commit and push your changes and verify that the error has been fixed.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> NO HUMAN INPUT RECEIVED.\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "# the assistant receives a message from the user, which contains the task description\n",
    "user_proxy.initiate_chat(\n",
    "    assistant,\n",
    "    message=\"\"\"Suggest a fix to an open good first issue of flaml\"\"\",\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When the assistant needs to consult the planner, it suggests a function call to `ask_planner`. When this happens, a line like the following will be displayed:\n",
    "\n",
    "***** Suggested function Call: ask_planner *****\n",
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  },
  "vscode": {
   "interpreter": {
    "hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}