
🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

OpenAI Proxy Server | Hosted Proxy (Preview) | Enterprise Tier


LiteLLM manages:

  • Translate inputs to provider's `completion`, `embedding`, and `image_generation` endpoints
  • Consistent output, text responses will always be available at `['choices'][0]['message']['content']`
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch after this list)
  • Set Budgets & Rate limits per project, api key, model - OpenAI Proxy Server
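
A minimal Router sketch for the retry/fallback point above, assuming two deployments behind one model alias (the Azure deployment name, keys, and api_base are placeholders):

```python
from litellm import Router

# two deployments that serve the same model alias; litellm_params mirror completion() kwargs
model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # alias callers will request
        "litellm_params": {
            "model": "azure/<your-azure-deployment>",  # placeholder deployment name
            "api_key": "<azure-api-key>",
            "api_base": "<azure-api-base>",
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "<openai-api-key>"},
    },
]

router = Router(model_list=model_list)

# the router picks a deployment and retries/falls back across them on failure
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
```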

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use docker images with the `main-stable` tag. These run through 12 hr load tests (1k req./min).

Support for more providers. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Important

LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide here.


Call any model supported by a provider, with `model=<provider_name>/<model_name>`. There might be provider-specific details here, so refer to provider docs for more information.
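
A minimal sketch of that calling pattern (API keys are placeholders; env var names follow each provider's docs):

```python
import os
from litellm import completion

# provider API keys are read from environment variables
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"
os.environ["COHERE_API_KEY"] = "<your-cohere-key>"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# openai call
response = completion(model="openai/gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="cohere/command-nightly", messages=messages)

# text is always at the same place, regardless of provider
print(response["choices"][0]["message"]["content"])
```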

Async (Docs)
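
A short async sketch using `acompletion`, the async counterpart of `completion` (model name is illustrative):

```python
import asyncio
from litellm import acompletion

async def get_response():
    messages = [{"role": "user", "content": "Hello, how are you?"}]
    # acompletion mirrors completion(), but is awaitable
    return await acompletion(model="openai/gpt-3.5-turbo", messages=messages)

response = asyncio.run(get_response())
print(response)
```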

Streaming (Docs)

liteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)
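
A streaming sketch, assuming the OpenAI-style chunk format:

```python
from litellm import completion

messages = [{"role": "user", "content": "Write a one-line haiku about the sea."}]

# stream=True returns an iterator of chunks instead of a single response
response = completion(model="openai/gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    # delta content can be None on the final chunk
    print(chunk.choices[0].delta.content or "", end="")
```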

Logging Observability (Docs)

LiteLLM exposes predefined callbacks to send data to Lunary, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack.
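
A sketch of wiring up callbacks (the env var names follow each integration's docs; values are placeholders):

```python
import os
import litellm
from litellm import completion

# credentials for the logging integrations (placeholder values)
os.environ["LANGFUSE_PUBLIC_KEY"] = "<langfuse-public-key>"
os.environ["LANGFUSE_SECRET_KEY"] = "<langfuse-secret-key>"
os.environ["HELICONE_API_KEY"] = "<helicone-api-key>"

# send data for successful calls to these integrations
litellm.success_callback = ["langfuse", "helicone"]

response = completion(
    model="openai/gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋"}],
)
```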

OpenAI Proxy - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

Step 1: Start litellm proxy
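
A sketch of starting the proxy from the CLI, assuming the flags from the proxy docs (model choice and default port 4000 are illustrative):

```shell
pip install 'litellm[proxy]'
litellm --model huggingface/bigcode/starcoder

# INFO: Proxy running on http://0.0.0.0:4000
```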

Step 2: Make ChatCompletions Request to Proxy
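
A sketch of a request through the proxy, pointing the standard openai client at it (base_url assumes the default local port):

```python
import openai

# any string works as the api_key unless the proxy has auth configured
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
)
print(response)
```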

Proxy Key Management (Docs)

UI on `/ui` on your proxy server

Set budgets and rate limits across multiple projects

POST /key/generate

Request
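
A request sketch (the master key `sk-1234`, models, duration, and metadata are placeholders):

```shell
curl 'http://0.0.0.0:4000/key/generate' \
  --header 'Authorization: Bearer sk-1234' \
  --header 'Content-Type: application/json' \
  --data-raw '{"models": ["gpt-3.5-turbo", "gpt-4"], "duration": "20m", "metadata": {"user": "alice@example.com", "team": "core-infra"}}'
```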

Expected Response
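
Roughly, the response contains the generated proxy key and its expiry (illustrative shape, not actual values):

```json
{
  "key": "sk-<generated-proxy-key>",
  "expires": "<expiry-timestamp>"
}
```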

Supported Providers (Docs)

Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation
openai
azure
aws - sagemaker
aws - bedrock
google - vertex_ai [Gemini]
google - palm
google AI Studio - gemini
mistral ai api
cloudflare AI Workers
cohere
anthropic
huggingface
replicate
together_ai
openrouter
ai21
baseten
vllm
nlp_cloud
aleph alpha
petals
ollama
deepinfra
perplexity-ai
Groq AI
Deepseek
anyscale
IBM - watsonx.ai
voyage ai
xinference [Xorbits Inference]

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install -E extra_proxy -E proxy

Step 3: Test your change:

cd litellm/tests   # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! 🚀

  • push your fork to your GitHub repo
  • submit a PR from there

Enterprise

For companies that need better security, user management, and professional support.

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated discord + slack
  • Custom SLAs
  • Secure access with Single Sign-On

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors