# speck-llm-observability
📂 This package has been archived. Any bugs or feature requests may not be addressed.
Speck is a live-trace debugging and metrics-tracking platform for LLM apps. It streamlines LLM app development by simplifying prompt engineering and testing across any LLM provider, saving you time and enhancing your workflow.
## Features
Speck's main features include:
- Live LLM debugging
- LLM observability
- Developer framework for calling models
- OpenAI proxy
## Support
| Model | Support |
| --- | --- |
| OpenAI | ✅ |
| AzureOpenAI | ✅ |
| Anthropic | ✅ |
| Replicate | ✅ |
| LiteLLM | ✅ |
The dashboard on the Speck website has four main sections:
- Home: Dashboard for LLM usage metrics
- Logs: Inspect recent LLM calls
- Playground: Prompt engineer with any model
- Live Debug: Test prompts with on-the-fly debugging
If you have any feature requests or want to stay up to date, please join our Discord community!
## Getting Started

### Python

```shell
pip install speck
```
Then, you can run something like:
```python
from speck import Speck

client = Speck(api_key=None, api_keys={"openai": "sk-...", "anthropic": "sk-..."})

response: Response = client.chat.create(
    prompt=[{"role": "system", "content": "Count to 5."}],
    config={"model": "anthropic:claude-2"},
)
```
Now each call will be logged for testing. Read more in our documentation!
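The `config={"model": "anthropic:claude-2"}` value above uses a `provider:model` naming scheme to select a backend. As a minimal sketch only (a hypothetical helper, not Speck's actual internals), such a string could be parsed like this:

```python
# Illustrative only: how a "provider:model" config string (e.g.
# "anthropic:claude-2") might be split into a provider and a model name.
# The function name and the default provider are assumptions.

def parse_model_config(model: str) -> tuple[str, str]:
    """Split a "provider:model" string into (provider, model_name).

    Strings without a provider prefix fall back to "openai".
    """
    provider, sep, name = model.partition(":")
    if not sep:  # no ":" found -> treat the whole string as the model name
        return "openai", model
    return provider, name

print(parse_model_config("anthropic:claude-2"))  # ('anthropic', 'claude-2')
print(parse_model_config("gpt-3.5-turbo"))       # ('openai', 'gpt-3.5-turbo')
```

A scheme like this lets one `config` dict address any of the supported providers in the table above without changing the calling code.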
## Development

### Installing Latest Version

```shell
pip3 install --upgrade git+https://github.com/speckai/speck.git#subdirectory=src/python
```