{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure chat completions example\n",
"\n",
"This example will cover chat completions using the Azure OpenAI service. It also includes information on content filtering."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"First, we install the necessary dependencies and import the libraries we will be using."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install \"openai>=1.0.0,<2.0.0\"\n",
"! pip install python-dotenv"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import openai\n",
"import dotenv\n",
"\n",
"dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authentication\n",
"\n",
"The Azure OpenAI service supports multiple authentication mechanisms, including API keys and Azure Active Directory token credentials."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"use_azure_active_directory = False # Set this flag to True if you are using Azure Active Directory"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using API key\n",
"\n",
"To set up the OpenAI SDK to use an *Azure API Key*, we need to set `api_key` to a key associated with your endpoint (you can find this key in *\"Keys and Endpoints\"* under *\"Resource Management\"* in the [Azure Portal](https://portal.azure.com)). You'll also find the endpoint for your resource here."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"if not use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"    api_key = os.environ[\"AZURE_OPENAI_API_KEY\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        api_key=api_key,\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Authentication using Azure Active Directory\n",
"Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the `azure-identity` library. This library provides the token credentials we need to authenticate and helps us build a token credential provider through the `get_bearer_token_provider` helper function. It's recommended to use `get_bearer_token_provider` over providing a static token to `AzureOpenAI` because this API will automatically cache and refresh tokens for you.\n",
"\n",
"For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the [documentation](https://learn.microsoft.com/azure/ai-services/openai/how-to/managed-identity)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install \"azure-identity>=1.15.0\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"if use_azure_active_directory:\n",
"    endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
"\n",
"    client = openai.AzureOpenAI(\n",
"        azure_endpoint=endpoint,\n",
"        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\"),\n",
"        api_version=\"2023-09-01-preview\"\n",
"    )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Note: the `AzureOpenAI` client infers the following arguments from their corresponding environment variables if they are not provided:\n",
"\n",
"- `api_key` from `AZURE_OPENAI_API_KEY`\n",
"- `azure_ad_token` from `AZURE_OPENAI_AD_TOKEN`\n",
"- `api_version` from `OPENAI_API_VERSION`\n",
"- `azure_endpoint` from `AZURE_OPENAI_ENDPOINT`\n"
]
},
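{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming the environment variables listed above are set), the client can then be constructed with no explicit arguments:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: relies on AZURE_OPENAI_API_KEY, OPENAI_API_VERSION, and\n",
"# AZURE_OPENAI_ENDPOINT being set in the environment.\n",
"client = openai.AzureOpenAI()"
]
},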
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deployments\n",
"\n",
"In this section we are going to create a deployment of a GPT model that we can use to create chat completions."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Deployments: Create in the Azure OpenAI Studio\n",
"Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the \"Deployments\" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"deployment = \"\" # Fill in the deployment name from the portal here"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create chat completions\n",
"\n",
"Now let's create a chat completion using the client we built."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create\n",
"response = client.chat.completions.create(\n",
"    model=deployment,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
"        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
"    ],\n",
"    temperature=0,\n",
")\n",
"\n",
"print(f\"{response.choices[0].message.role}: {response.choices[0].message.content}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create a streaming chat completion\n",
"\n",
"We can also stream the response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response = client.chat.completions.create(\n",
"    model=deployment,\n",
"    messages=[\n",
"        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"        {\"role\": \"user\", \"content\": \"Knock knock.\"},\n",
"        {\"role\": \"assistant\", \"content\": \"Who's there?\"},\n",
"        {\"role\": \"user\", \"content\": \"Orange.\"},\n",
"    ],\n",
"    temperature=0,\n",
"    stream=True\n",
")\n",
"\n",
"for chunk in response:\n",
"    if len(chunk.choices) > 0:\n",
"        delta = chunk.choices[0].delta\n",
"\n",
"        if delta.role:\n",
"            print(delta.role + \": \", end=\"\", flush=True)\n",
"        if delta.content:\n",
"            print(delta.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Content filtering\n",
"\n",
"The Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it [here](https://learn.microsoft.com/azure/ai-services/openai/concepts/content-filter).\n",
"\n",
"If the prompt is flagged by the content filter, the library will raise a `BadRequestError` exception with a `content_filter` error code. Otherwise, you can access the `prompt_filter_results` and `content_filter_results` on the response to see the results of the content filtering and what categories were flagged."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Prompt flagged by content filter"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"messages = [\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"    {\"role\": \"user\", \"content\": \"<text violating the content policy>\"}\n",
"]\n",
"\n",
"try:\n",
"    completion = client.chat.completions.create(\n",
"        messages=messages,\n",
"        model=deployment,\n",
"    )\n",
"except openai.BadRequestError as e:\n",
"    err = json.loads(e.response.text)\n",
"    if err[\"error\"][\"code\"] == \"content_filter\":\n",
"        print(\"Content filter triggered!\")\n",
"        content_filter_result = err[\"error\"][\"innererror\"][\"content_filter_result\"]\n",
"        for category, details in content_filter_result.items():\n",
"            print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Checking the result of the content filter"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
"    {\"role\": \"user\", \"content\": \"What's the biggest city in Washington?\"}\n",
"]\n",
"\n",
"completion = client.chat.completions.create(\n",
"    messages=messages,\n",
"    model=deployment,\n",
")\n",
"print(f\"Answer: {completion.choices[0].message.content}\")\n",
"\n",
"# prompt content filter result in \"model_extra\" for azure\n",
"prompt_filter_result = completion.model_extra[\"prompt_filter_results\"][0][\"content_filter_results\"]\n",
"print(\"\\nPrompt content filter results:\")\n",
"for category, details in prompt_filter_result.items():\n",
"    print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")\n",
"\n",
"# completion content filter result\n",
"print(\"\\nCompletion content filter results:\")\n",
"completion_filter_result = completion.choices[0].model_extra[\"content_filter_results\"]\n",
"for category, details in completion_filter_result.items():\n",
"    print(f\"{category}:\\n filtered={details['filtered']}\\n severity={details['severity']}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}