# Promptflow examples
## Get started

### Install dependencies

- Bootstrap your Python environment, e.g., create a new conda environment:

  ```bash
  conda create -n pf-examples python=3.9
  ```

- Install the required packages into the environment:

  ```bash
  pip install -r requirements.txt
  ```

- Verify the installed SDK:

  ```bash
  pip show promptflow
  ```
### Quick start

| path | description |
| ---- | ----------- |
| quickstart.ipynb | A quickstart tutorial to run a flow and evaluate it. |
| quickstart-azure.ipynb | A quickstart tutorial to run a flow in Azure AI and evaluate it. |
## CLI examples

### Tutorials (tutorials)

| path | description |
| ---- | ----------- |
| chat-with-pdf | Retrieval Augmented Generation (RAG) has become a prevalent pattern for building intelligent applications with Large Language Models (LLMs), since it can infuse external knowledge into the model, which is not trained on up-to-date or proprietary information |
| azure-app-service | This example demos how to deploy a flow using Azure App Service |
| create-service-with-flow | This example shows how to create a simple service with a flow |
| distribute-flow-as-executable-app | This example demos how to package a flow as an executable app |
| docker | This example demos how to deploy a flow as a docker app |
| kubernetes | This example demos how to deploy a flow as a Kubernetes app |
| promptflow-quality-improvement | This tutorial is designed to enhance your understanding of improving flow quality through prompt tuning and evaluation |
| tracing | Prompt flow provides the tracing feature to capture and visualize the internal execution details of all flows |
### Prompty (prompty)

| path | description |
| ---- | ----------- |
| basic | A basic prompt that uses the chat API to answer questions, with the connection configured using environment variables |
| chat-basic | A prompt that uses the chat API to answer questions with chat history, leveraging a promptflow connection |
| eval-apology | A prompt that determines whether a chat conversation contains an apology from the assistant |
| eval-basic | A basic evaluator prompt for the QA scenario |
| format-output | A few examples that demo different prompty response formats such as text and json_object, and how to enable stream output |
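The prompty samples above share one file format: YAML frontmatter describing the model and inputs, followed by the prompt template itself. A minimal sketch of that shape (the name, deployment, and template content are illustrative, not taken from these samples):

```yaml
---
name: basic_chat
model:
  api: chat
  configuration:
    type: azure_openai
    azure_deployment: gpt-4o   # illustrative deployment name
inputs:
  question:
    type: string
---
system:
You are a helpful assistant.

user:
{{question}}
```

A file like this can be run with `pf flow test --flow <path>.prompty --inputs question="..."`, or loaded and evaluated from the SDK as shown in the prompty notebooks listed under SDK examples.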
### Flex Flows (flex-flows)

| path | description |
| ---- | ----------- |
| basic | A basic standard flow defined using a function entry that calls Azure OpenAI with connection info stored in environment variables |
| chat-async-stream | A chat flow defined using an async class entry that returns output in stream mode |
| chat-basic | A basic chat flow defined using a class entry |
| chat-minimal | A chat flow defined using a function with minimal code |
| chat-stream | A chat flow defined using a class entry that returns output in stream mode |
| chat-with-functions | This flow covers how to use the LLM chat API in combination with external functions to extend the capabilities of GPT models |
| eval-checklist | An example flow defined using a class entry which demos how to evaluate whether an answer passes a user-specified checklist |
| eval-code-quality | An example flow defined using a class-based entry which leverages a model config to evaluate the quality of a code snippet |
| eval-criteria-with-langchain | An example flow converting a LangChain criteria evaluator application to a flex flow |
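At their core, the function-entry flex flows above are plain Python callables: promptflow derives the flow's inputs and outputs from the function signature. A minimal sketch of a function entry (the function name and echo body are illustrative stand-ins; the real samples call Azure OpenAI here):

```python
def my_flow(question: str) -> str:
    """Function-entry flex flow sketch.

    Inputs and outputs come from the signature: one string input
    named `question`, one string output. The real flex-flow samples
    replace the body below with an LLM call.
    """
    return f"You asked: {question}"
```

A class entry follows the same idea, with a `__call__` method as the entry point so that connections or model config can be passed to `__init__` once and reused across calls.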
### Flows (flows)

#### Standard flows

| path | description |
| ---- | ----------- |
| autonomous-agent | A flow showcasing how to construct an AutoGPT agent with promptflow that autonomously figures out how to apply the given functions to solve a goal; in this sample the goal is film trivia, providing accurate and up-to-date information about movies, directors, actors, and more |
| basic | A basic standard flow using a custom python tool that calls Azure OpenAI with connection info stored in environment variables |
| basic-with-builtin-llm | A basic standard flow that calls Azure OpenAI with the builtin LLM tool |
| basic-with-connection | A basic standard flow using a custom python tool that calls Azure OpenAI with connection info stored in a custom connection |
| conditional-flow-for-if-else | A conditional flow for the if-else scenario |
| conditional-flow-for-switch | A conditional flow for the switch scenario |
| customer-intent-extraction | This sample uses an OpenAI chat model (ChatGPT/GPT-4) to identify customer intent from a customer's question |
| describe-image | A flow that takes an image input, flips it horizontally, and uses the OpenAI GPT-4V tool to describe it |
| flow-with-additional-includes | Users sometimes need to reference common files or folders; this sample demos how to solve that problem using additional_includes |
| flow-with-symlinks | Users sometimes need to reference common files or folders; this sample demos how to solve that problem using symlinks |
| gen-docstring | This example helps you automatically generate docstrings for Python code and returns the modified code |
| maths-to-code | Math to Code is a project that utilizes the ChatGPT model to generate code that models math questions, then executes the generated code to obtain the final numerical answer |
| named-entity-recognition | A flow that performs a named entity recognition task |
| question-simulation | This question simulation flow generates suggestions for the next question based on the previous chat history |
| web-classification | A flow demonstrating multi-class classification with an LLM |
#### Evaluation flows

| path | description |
| ---- | ----------- |
| eval-basic | This example shows how to create a basic evaluation flow |
| eval-chat-math | This example shows how to evaluate the answers to math questions by numerically comparing the output results with the standard answers |
| eval-classification-accuracy | A flow illustrating how to evaluate the performance of a classification system |
| eval-entity-match-rate | A flow that evaluates the entity match rate |
| eval-groundedness | A flow that leverages an LLM to evaluate groundedness: whether the answer states only facts that are present in the given context |
| eval-multi-turn-metrics | This evaluation flow evaluates a conversation by using Large Language Models (LLMs) to measure the quality of the responses |
| eval-perceived-intelligence | A flow that leverages an LLM to evaluate perceived intelligence |
| eval-qna-non-rag | A flow evaluating Q&A systems by leveraging Large Language Models (LLMs) to measure the quality and safety of responses |
| eval-qna-rag-metrics | A flow evaluating Q&A RAG (Retrieval Augmented Generation) systems by leveraging state-of-the-art Large Language Models (LLMs) to measure the quality and safety of responses |
| eval-single-turn-metrics | This evaluation flow evaluates a question-and-answer pair by using Large Language Models (LLMs) to measure the quality of the answer |
| eval-summarization | This flow implements a reference-free automatic abstractive summarization evaluation across four dimensions: fluency, coherence, consistency, relevance |
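Most evaluation flows above share a two-step shape: a per-line grading step applied to each prediction, then an aggregation step that reduces the grades to a metric. A pure-Python sketch of that pattern, loosely modeled on the accuracy idea behind eval-classification-accuracy (function names are illustrative; the real flow wires these steps as flow nodes):

```python
def grade(groundtruth: str, prediction: str) -> str:
    # Per-line step: case-insensitive comparison of one prediction
    # against its ground-truth label.
    if groundtruth.strip().lower() == prediction.strip().lower():
        return "Correct"
    return "Incorrect"

def aggregate(grades: list[str]) -> float:
    # Aggregation step: fraction of lines graded "Correct".
    if not grades:
        return 0.0
    return sum(g == "Correct" for g in grades) / len(grades)
```

The LLM-based evaluators in this folder keep the same structure but replace the string comparison in the grading step with a scoring prompt.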
#### Chat flows

| path | description |
| ---- | ----------- |
| chat-basic | This example shows how to create a basic chat flow |
| chat-math-variant | A prompt tuning case with 3 prompt variants for math question answering |
| chat-with-image | This flow demonstrates how to create a chatbot that can take image and text as input |
| chat-with-pdf | A simple flow that allows you to ask questions about the content of a PDF file and get answers |
| chat-with-wikipedia | This flow demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message |
| use_functions_with_chat_models | This flow covers how to use the LLM tool chat API in combination with external functions to extend the capabilities of GPT models |
### Tool Use Cases

| path | description |
| ---- | ----------- |
| cascading-inputs-tool-showcase | A flow demonstrating the use of a tool with cascading inputs, which are frequently used where the selection in one input field determines what subsequent inputs should be shown; they help create a more efficient, user-friendly, and error-free input process |
| custom-strong-type-connection-package-tool-showcase | A flow demonstrating the use of a package tool with a custom strong type connection, which provides a secure way to manage credentials for external APIs and data sources, and offers an improved user experience and IntelliSense compared to custom connections |
| custom-strong-type-connection-script-tool-showcase | A flow demonstrating the use of a script tool with a custom strong type connection, which provides a secure way to manage credentials for external APIs and data sources, and offers an improved user experience and IntelliSense compared to custom connections |
| custom_llm_tool_showcase | A flow demonstrating how to use a custom_llm tool, which enables users to seamlessly connect to a large language model with a prompt tuning experience using a PromptTemplate |
| dynamic-list-input-tool-showcase | A flow demonstrating how to use a tool with a dynamic list input |
### Connections (connections)

| path | description |
| ---- | ----------- |
| connections | This folder contains example YAML files for creating connections using the pf CLI |
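A connection YAML pairs a connection type with its endpoint and secrets. A sketch of the general shape for an Azure OpenAI connection (the name, key placeholder, and URL are placeholders to fill in, not values from this folder):

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: "<user-input>"
api_base: "https://<your-resource>.openai.azure.com/"
```

A file like this is registered with `pf connection create --file <file>.yml`; secrets are stored locally and never written back into the YAML.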
## SDK examples

| path | description |
| ---- | ----------- |
| quickstart.ipynb | A quickstart tutorial to run a flow and evaluate it. |
| quickstart-azure.ipynb | A quickstart tutorial to run a flow in Azure AI and evaluate it. |
| flow-as-function.ipynb | This guide walks you through the main scenarios of executing a flow as a function. |
| pipeline.ipynb | Create a pipeline using components to run a distributed job with TensorFlow. |
| cloud-run-management.ipynb | Flow run management in Azure AI. |
| run-management.ipynb | Flow run management. |
| trace-autogen-groupchat.ipynb | Tracing LLM calls in an autogen group chat application. |
| otlp-trace-collector.ipynb | A tutorial on how to leverage a custom OTLP collector. |
| trace-langchain.ipynb | Tracing LLM calls in a langchain application. |
| trace-llm.ipynb | Tracing an LLM application. |
| connection.ipynb | Manage various types of connections using the SDK. |
| flex-flow-quickstart-azure.ipynb | A quickstart tutorial to run a flex flow and evaluate it in Azure. |
| flex-flow-quickstart.ipynb | A quickstart tutorial to run a flex flow and evaluate it. |
| chat-stream-with-async-flex-flow.ipynb | A quickstart tutorial to run a class-based flex flow in stream mode and evaluate it. |
| chat-with-class-based-flow-azure.ipynb | A quickstart tutorial to run a class-based flex flow and evaluate it in Azure. |
| chat-with-class-based-flow.ipynb | A quickstart tutorial to run a class-based flex flow and evaluate it. |
| chat-stream-with-flex-flow.ipynb | A quickstart tutorial to run a class-based flex flow in stream mode and evaluate it. |
| langchain-eval.ipynb | A tutorial on converting a LangChain criteria evaluator application to a flex flow. |
| prompty-quickstart.ipynb | A quickstart tutorial to run a prompty and evaluate it. |
| chat-with-prompty.ipynb | A quickstart tutorial to run a chat prompty and evaluate it. |
| prompty-output-format.ipynb | |
| chat-with-pdf-azure.ipynb | A tutorial of the chat-with-pdf flow that executes in Azure AI. |
| chat-with-pdf.ipynb | A tutorial of the chat-with-pdf flow that allows users to ask questions about the content of a PDF file and get answers. |
## Contributing

We welcome contributions and suggestions! Please see the contributing guidelines for details.
## Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. Please see the code of conduct for details.