llm-guard
LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.
- llm
- language
- model
- security
- adversarial
- attacks
- prompt
- injection
- leakage
- PII
- detection
- self-hardening
- firewall
- adversarial-machine-learning
- chatgpt
- large-language-models
- llm-security
- llmops
- prompt-engineering
- prompt-injection
- security-tools
- transformers