Reverse Dependencies of llm-guard
The following projects have a declared dependency on llm-guard:
- allow-agent — A lightweight Python framework for agent content moderation.
- askpythia — An AI hallucination detection tool including basic validators.
- CandyLLM — CandyLLM: Unified framework for HuggingFace and OpenAI Text-generation Models
- llm-guard — LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
- rqle-ai-langchain-util — Library facilitating the integration of different LLM providers in LangChain (e.g. `ollama`, `Google Gemini`).