🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
- Jupyter Notebook
29Обновлено 7 месяцев назад
Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
llmragllmopsprompt-engineeringtestingpromptsevaluation-frameworkevaluationllm-evalcicdci-cdcillm-evaluationllm-evaluation-frameworkprompt-testing- TypeScript
01Обновлено 6 месяцев назад
🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
- Jupyter Notebook
01Обновлено 6 месяцев назад
The Security Toolkit for LLM Interactions
llmtransformerslarge-language-modelschatgptllmopsprompt-engineeringllm-securityadversarial-machine-learningprompt-injectionsecurity-tools- Python
01Обновлено 7 месяцев назад
Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
- JavaScript
00Обновлено 5 месяцев назад
Structured Text Generation
- Python
00Обновлено 5 месяцев назад
LLMFlows - Simple, Explicit and Transparent LLM Apps
pythonllmmachine-learningaichatgptopenaillmsllmopsllm-inferencegpt-4vector-databasequestion-answeringprompt-engineering- Python
00Обновлено 7 месяцев назад
Prompt management and metrics tracking for LLM apps
- Python
00Обновлено 7 месяцев назад
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
- Jupyter Notebook
00Обновлено 7 месяцев назад