Internal Tools
Safe defaults for internal employee-facing AI tools.
Overview
The Internal Tools profile is specifically engineered for applications used by employees within your organization. Unlike public-facing bots, internal tools often need to handle sensitive company data (code, docs, strategy) while preventing that data from leaking to external model providers or logs.
This profile focuses heavily on outbound data security: ensuring that secrets, keys, and proprietary data don't accidentally leave your secure perimeter via LLM prompts or application logs.
Included Guardrails
This profile includes 5 rules; a sketch of how one such outbound check might run follows the list.
PII Detection Guardrail
Detects and optionally redacts personally identifiable information in user input.
Internal Data Leak Guardrail
Blocks exposure of internal or proprietary information.
System Prompt Leak Guardrail
Prevents attempts to extract system or developer prompts.
Secrets in Logs Guardrail
Prevents secrets and credentials from being logged.
Model Version Pin Guardrail
Prevents unintended model version changes.
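As a rough illustration of the kind of outbound check these rules perform, here is a minimal, hypothetical scan in Python. The patterns and the scan_outbound helper are invented for this sketch and are not the profile's actual implementation.

import re

# Invented patterns for illustration; the real rules ship their own.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),     # PEM private key header
    re.compile(r"\bCONFIDENTIAL\b|\bINTERNAL USE ONLY\b"), # proprietary data markers
]

def scan_outbound(text: str) -> list[str]:
    # Return every pattern that matches text bound for the model or logs.
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

prompt = "Debug this: -----BEGIN RSA PRIVATE KEY----- MIIEv..."
findings = scan_outbound(prompt)
if findings:
    print("Blocked outbound content:", findings)

A production guardrail would also normalize encodings and scan tool outputs, but the core idea is the same: check content before it leaves the perimeter.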
Key Benefits
Data Leak Prevention
Aggressively scans for API keys, internal hostnames, and proprietary data markers.
Logging Safety
Ensures that sensitive inputs are redacted before being written to any application logs (see the sketch after this list).
Model Stability
Pins model versions to prevent unexpected behavioral changes in internal workflows.
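To make the logging behavior concrete, here is a hypothetical Python logging filter that masks credential-like strings before a record is written. The regex and filter class are invented for this sketch; the actual guardrail's patterns and its hash redact mode may differ.

import logging
import re

# Invented pattern: mask anything that looks like key=value credentials.
TOKEN_RE = re.compile(r"(api[_-]?key|token|password)\s*[:=]\s*\S+", re.IGNORECASE)

class RedactSecretsFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place, then let the record through.
        record.msg = TOKEN_RE.sub(r"\1=[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("internal-tool")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactSecretsFilter())
logger.warning("auth failed, api_key=sk-live-123 retrying")
# prints: auth failed, api_key=[REDACTED] retrying

With redact_mode set to "hash", a real implementation would replace the value with a digest rather than a fixed mask, so distinct secrets stay distinguishable in logs without being recoverable.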
When should I use this?
Use this profile for employee-facing internal applications that handle sensitive company data (code, docs, strategy), where keeping that data inside your perimeter matters more than filtering public abuse.
Integration
Enable the profile in your guardrails configuration and override individual rules as needed:

profile: internal-tools
overrides:
  internal-data-leak:
    patterns:
      - "CONFIDENTIAL"
      - "INTERNAL USE ONLY"
  secrets-in-logs:
    redact_mode: "hash"
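For illustration, loading the profile from application code might look like the following. Every name here (the guardrails module, load_profile, check, and the result attributes) is a hypothetical stand-in, since the actual SDK surface isn't shown in this document.

# Hypothetical stand-ins throughout; adapt to your actual guardrails SDK.
from guardrails import load_profile  # assumed helper, not a confirmed API

profile = load_profile("internal-tools", overrides={
    "internal-data-leak": {"patterns": ["CONFIDENTIAL", "INTERNAL USE ONLY"]},
    "secrets-in-logs": {"redact_mode": "hash"},
})

result = profile.check("Summarize the INTERNAL USE ONLY roadmap")
print(result.blocked)  # assumed attribute: True when a rule fires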
Frequently Asked Questions
Can I use this for customer-facing bots?
It is not ideal. This profile prioritizes protecting company data over filtering NSFW content or jailbreaks, which are more critical for public bots.
Does it block all PII?
Not all of it. By default it warns on PII rather than blocking, and it can be configured to allow employee names and emails on the assumption of an internal usage context; see the override sketch below.
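A hypothetical override along those lines, assuming the PII rule is keyed as pii-detection and accepts an action plus an allowlist of categories (none of these keys are confirmed by the profile above):

profile: internal-tools
overrides:
  pii-detection:
    action: "warn"        # assumed key: warn instead of block
    allow_categories:     # assumed key: PII types to permit
      - "employee_name"
      - "employee_email"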