The fastest && easiest LLM security guardrails for CX AI Agents and applications.
Updated Jan 6, 2025 · Python
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Example of running last_layer with FastAPI on Vercel