promptfoo: test your LLM app locally
`promptfoo` is a tool for testing and evaluating LLM output quality.
With promptfoo, you can:
- Build reliable prompts, models, and RAGs with benchmarks specific to your use-case
- Speed up evaluations with caching, concurrency, and live reloading
- Score outputs automatically by defining metrics
- Use as a CLI, library, or in CI/CD
- Use OpenAI, Anthropic, Azure, Google, HuggingFace, open-source models like Llama, or integrate custom API providers for any LLM API
The goal: test-driven LLM development instead of trial-and-error.
npx promptfoo@latest init
» View full documentation «
promptfoo produces matrix views that let you quickly evaluate outputs across many prompts and inputs:
It works on the command line too:
Why choose promptfoo?
There are many different ways to evaluate prompts. Here are some reasons to consider promptfoo:
- Battle-tested: promptfoo was built to eval & improve LLM apps serving over 10 million users in production. The tooling is flexible and can be adapted to many setups.
- Simple, declarative test cases: Define your evals without writing code or working with heavy notebooks.
- Language agnostic: Use JavaScript, Python, or whatever else you're working in.
- Share & collaborate: Built-in share functionality & web viewer for working with teammates.
- Open-source: LLM evals are a commodity and should be served by 100% open-source projects with no strings attached.
- Private: This software runs completely locally. Your evals run on your machine and talk directly with the LLM.
Workflow
Start by establishing a handful of test cases - core use cases and failure cases that you want to ensure your prompt can handle.
As you explore modifications to the prompt, use `promptfoo eval` to rate all outputs. This ensures the prompt is actually improving overall.
As you collect more examples and establish a user feedback loop, continue to build the pool of test cases.
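For example, a starting config might cover one core use case and one known failure mode. The sketch below is illustrative only (file names, variables, and assertion values are placeholders; see the Usage and Configuration sections for the real format):

```yaml
prompts: [prompt1.txt]
providers: [openai:gpt-3.5-turbo]
tests:
  # Core use case
  - vars:
      input: Hello world
    assert:
      - type: contains
        value: hello
  # Known failure mode
  - vars:
      input: Who are you?
    assert:
      - type: llm-rubric
        value: does not describe self as an AI, model, or chatbot
```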
Usage
To get started, run this command:
npx promptfoo@latest init
This will create some placeholders in your current directory: `prompts.txt` and `promptfooconfig.yaml`.
After editing the prompts and variables to your liking, run the eval command to kick off an evaluation:
npx promptfoo@latest eval
Configuration
The YAML configuration format runs each prompt through a series of example inputs (aka "test case") and checks if they meet requirements (aka "assert").
See the Configuration docs for a detailed guide.
```yaml
prompts: [prompt1.txt, prompt2.txt]
providers: [openai:gpt-3.5-turbo, ollama:llama2:70b]
tests:
  - description: 'Test translation to French'
    vars:
      language: French
      input: Hello world
    assert:
      - type: contains-json
      - type: javascript
        value: output.length < 100

  - description: 'Test translation to German'
    vars:
      language: German
      input: How's it going?
    assert:
      - type: llm-rubric
        value: does not describe self as an AI, model, or chatbot
      - type: similar
        value: was geht
        threshold: 0.6 # cosine similarity
```
Supported assertion types
See Test assertions for full details.
Deterministic eval metrics
| Assertion Type | Returns true if... |
| --- | --- |
| `equals` | output matches exactly |
| `contains` | output contains substring |
| `icontains` | output contains substring, case insensitive |
| `regex` | output matches regex |
| `starts-with` | output starts with string |
| `contains-any` | output contains any of the listed substrings |
| `contains-all` | output contains all of the listed substrings |
| `icontains-any` | output contains any of the listed substrings, case insensitive |
| `icontains-all` | output contains all of the listed substrings, case insensitive |
| `is-json` | output is valid json (optional json schema validation) |
| `contains-json` | output contains valid json (optional json schema validation) |
| `javascript` | provided JavaScript function validates the output |
| `python` | provided Python function validates the output |
| `webhook` | provided webhook returns `{"pass": true}` |
| `rouge-n` | Rouge-N score is above a given threshold |
| `levenshtein` | Levenshtein distance is below a threshold |
| `latency` | Latency is below a threshold (milliseconds) |
| `perplexity` | Perplexity is below a threshold |
| `cost` | Cost is below a threshold (for models with cost info such as GPT) |
| `is-valid-openai-function-call` | Ensure that the function call matches the function's JSON schema |
| `is-valid-openai-tools-call` | Ensure that all tool calls match the tools JSON schema |
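Several deterministic checks can be combined on a single test case. For example (an illustrative sketch; the variables and values are placeholders):

```yaml
tests:
  - vars:
      language: French
      input: Hello world
    assert:
      - type: icontains
        value: bonjour
      - type: javascript
        value: output.length < 100
      - type: latency
        threshold: 5000
      - type: levenshtein
        value: Bonjour le monde
        threshold: 5
```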
Model-assisted eval metrics
| Assertion Type | Method |
| --- | --- |
| `similar` | Embeddings and cosine similarity are above a threshold |
| `classifier` | Run LLM output through a classifier |
| `llm-rubric` | LLM output matches a given rubric, using a Language Model to grade output |
| `answer-relevance` | Ensure that LLM output is related to original query |
| `context-faithfulness` | Ensure that LLM output uses the context |
| `context-recall` | Ensure that ground truth appears in context |
| `context-relevance` | Ensure that context is relevant to original query |
| `factuality` | LLM output adheres to the given facts, using Factuality method from OpenAI eval |
| `model-graded-closedqa` | LLM output adheres to given criteria, using Closed QA method from OpenAI eval |
| `select-best` | Compare multiple outputs for a test case and pick the best one |
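Model-graded assertions are declared the same way as deterministic ones. For example (an illustrative sketch; the values are placeholders):

```yaml
tests:
  - vars:
      question: What is the capital of France?
    assert:
      - type: factuality
        value: The capital of France is Paris
      - type: model-graded-closedqa
        value: answers the question without apologizing
```

The provider that performs the grading can be overridden with the `--grader` option described in the Command-line section below.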
Every test type can be negated by prepending `not-`. For example, `not-equals` or `not-contains`.
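For example, a hypothetical guardrail check:

```yaml
assert:
  - type: not-icontains
    value: as an AI language model
  - type: not-equals
    value: I don't know
```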
Tests from spreadsheet
Some people prefer to configure their LLM tests in a CSV. In that case, the config is pretty simple:
```yaml
prompts: [prompts.txt]
providers: [openai:gpt-3.5-turbo]
tests: tests.csv
```
See example CSV.
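As a rough guide, each CSV column name corresponds to a prompt variable, and promptfoo also supports a special `__expected` column for per-row assertions; see the linked example and docs for the exact format.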
Command-line
If you're looking to customize your usage, you have a wide set of parameters at your disposal.
| Option | Description |
| --- | --- |
| `--prompts <paths...>` | Paths to prompt files, directory, or glob |
| `--providers <name or path...>` | One of: openai:chat, openai:completion, openai:model-name, localai:chat:model-name, localai:completion:model-name. See API providers |
| `--output <path>` | Path to output file (csv, json, yaml, html) |
| `--tests <path>` | Path to external test file |
| `--config <paths...>` | Path to one or more configuration files. `promptfooconfig.yaml` is automatically loaded if present |
| `--max-concurrency <number>` | Maximum number of concurrent API calls |
| `--table-cell-max-length <number>` | Truncate console table cells to this length |
| `--prompt-prefix <path>` | This prefix is prepended to every prompt |
| `--prompt-suffix <path>` | This suffix is appended to every prompt |
| `--grader` | Provider that will conduct the evaluation, if you are using LLM to grade your output |
After running an eval, you may optionally use the `view` command to open the web viewer:
npx promptfoo view
Examples
Prompt quality
In this example, we evaluate whether adding adjectives to the personality of an assistant bot affects the responses:
npx promptfoo eval -p prompts.txt -r openai:gpt-3.5-turbo -t tests.csv
This command will evaluate the prompts in `prompts.txt`, substituting the variable values from `tests.csv`, and output results in your terminal.
You can also output a nice spreadsheet, JSON, YAML, or an HTML file:
Model quality
In the next example, we evaluate the difference between GPT-3.5 and GPT-4 outputs for a given prompt:
npx promptfoo eval -p prompts.txt -r openai:gpt-3.5-turbo openai:gpt-4 -o output.html
Produces this HTML table:
Usage (node package)
You can also use `promptfoo` as a library in your project by importing the `evaluate` function. The function takes the following parameters:
- `testSuite`: the JavaScript equivalent of the promptfooconfig.yaml

  ```typescript
  interface EvaluateTestSuite {
    providers: string[]; // Valid provider name (e.g. openai:gpt-3.5-turbo)
    prompts: string[]; // List of prompts
    tests: string | TestCase[]; // Path to a CSV file, or list of test cases
    defaultTest?: Omit<TestCase, 'description'>; // Optional: add default vars and assertions on test case
    outputPath?: string | string[]; // Optional: write results to file
  }

  interface TestCase {
    // Optional description of what you're testing
    description?: string;

    // Key-value pairs to substitute in the prompt
    vars?: Record<string, string | string[] | object>;

    // Optional list of automatic checks to run on the LLM output
    assert?: Assertion[];

    // Additional configuration settings for the prompt
    options?: PromptConfig & OutputConfig & GradingConfig;

    // The required score for this test case. If not provided, the test case is graded pass/fail.
    threshold?: number;
  }

  interface Assertion {
    type: string;
    value?: string;
    threshold?: number; // Required score for pass
    weight?: number; // The weight of this assertion compared to other assertions in the test case. Defaults to 1.
    provider?: ApiProvider; // For assertions that require an LLM provider
  }
  ```

- `options`: misc options related to how the tests are run

  ```typescript
  interface EvaluateOptions {
    maxConcurrency?: number;
    showProgressBar?: boolean;
    generateSuggestions?: boolean;
  }
  ```
Example
`promptfoo` exports an `evaluate` function that you can use to run prompt evaluations.
```js
import promptfoo from 'promptfoo';

const results = await promptfoo.evaluate({
  prompts: ['Rephrase this in French: {{body}}', 'Rephrase this like a pirate: {{body}}'],
  providers: ['openai:gpt-3.5-turbo'],
  tests: [
    {
      vars: {
        body: 'Hello world',
      },
    },
    {
      vars: {
        body: "I'm hungry",
      },
    },
  ],
});
```
This code imports the `promptfoo` library, defines the evaluation options, and then calls the `evaluate` function with these options.
See the full example here, which includes an example results object.
Configuration
- Main guide: Learn about how to configure your YAML file, set up prompt files, etc.
- Configuring test cases: Learn more about how to configure expected outputs and test assertions.
Installation
API Providers
We support OpenAI's API as well as a number of open-source models. It's also possible to set up your own custom API provider. See Provider documentation for more details.
Development
Here's how to build and run locally:
```sh
git clone https://github.com/promptfoo/promptfoo.git
cd promptfoo
npm i
```

```sh
cd path/to/experiment-with-promptfoo  # contains your promptfooconfig.yaml
npx path/to/promptfoo-source eval
```
The web UI is located in `src/web/nextui`. To run it in dev mode, run `npm run local:web`. This will host the web UI at http://localhost:3000. The web UI expects `promptfoo view` to be running separately.
You may also have to set some placeholder environment variables (it is not necessary to sign up for a Supabase account):
```sh
NEXT_PUBLIC_SUPABASE_URL=http://
NEXT_PUBLIC_SUPABASE_ANON_KEY=abc
```
Contributions are welcome! Please feel free to submit a pull request or open an issue.
`promptfoo` includes several npm scripts to make development easier and more efficient. To use these scripts, run `npm run <script_name>` in the project directory.
Here are some of the available scripts:
- `build`: Transpile TypeScript files to JavaScript
- `build:watch`: Continuously watch and transpile TypeScript files on changes
- `test`: Run test suite
- `test:watch`: Continuously run test suite on changes
- `db:generate`: Generate new db migrations (and create the db if it doesn't already exist). Note that after generating a new migration, you'll have to run `npm i` to copy the migrations into `dist/`.
- `db:migrate`: Run existing db migrations (and create the db if it doesn't already exist)